Nov 25 09:31:34 localhost kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 25 09:31:34 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 25 09:31:34 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 09:31:34 localhost kernel: BIOS-provided physical RAM map:
Nov 25 09:31:34 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 25 09:31:34 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 25 09:31:34 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 25 09:31:34 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 25 09:31:34 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 25 09:31:34 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 25 09:31:34 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 25 09:31:34 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 25 09:31:34 localhost kernel: NX (Execute Disable) protection: active
Nov 25 09:31:34 localhost kernel: APIC: Static calls initialized
Nov 25 09:31:34 localhost kernel: SMBIOS 2.8 present.
Nov 25 09:31:34 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 25 09:31:34 localhost kernel: Hypervisor detected: KVM
Nov 25 09:31:34 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 25 09:31:34 localhost kernel: kvm-clock: using sched offset of 12029595408 cycles
Nov 25 09:31:34 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 25 09:31:34 localhost kernel: tsc: Detected 2800.000 MHz processor
Nov 25 09:31:34 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 25 09:31:34 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 25 09:31:34 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 25 09:31:34 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 25 09:31:34 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 25 09:31:34 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 25 09:31:34 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 25 09:31:34 localhost kernel: Using GB pages for direct mapping
Nov 25 09:31:34 localhost kernel: RAMDISK: [mem 0x2ed25000-0x3368afff]
Nov 25 09:31:34 localhost kernel: ACPI: Early table checksum verification disabled
Nov 25 09:31:34 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 25 09:31:34 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:31:34 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:31:34 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:31:34 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 25 09:31:34 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:31:34 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 25 09:31:34 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 25 09:31:34 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 25 09:31:34 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 25 09:31:34 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 25 09:31:34 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 25 09:31:34 localhost kernel: No NUMA configuration found
Nov 25 09:31:34 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 25 09:31:34 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 25 09:31:34 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 25 09:31:34 localhost kernel: Zone ranges:
Nov 25 09:31:34 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 25 09:31:34 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 25 09:31:34 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 09:31:34 localhost kernel:   Device   empty
Nov 25 09:31:34 localhost kernel: Movable zone start for each node
Nov 25 09:31:34 localhost kernel: Early memory node ranges
Nov 25 09:31:34 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 25 09:31:34 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 25 09:31:34 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 25 09:31:34 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 25 09:31:34 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 25 09:31:34 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 25 09:31:34 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 25 09:31:34 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 25 09:31:34 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 25 09:31:34 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 25 09:31:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 25 09:31:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 25 09:31:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 25 09:31:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 25 09:31:34 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 25 09:31:34 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 25 09:31:34 localhost kernel: TSC deadline timer available
Nov 25 09:31:34 localhost kernel: CPU topo: Max. logical packages:   8
Nov 25 09:31:34 localhost kernel: CPU topo: Max. logical dies:       8
Nov 25 09:31:34 localhost kernel: CPU topo: Max. dies per package:   1
Nov 25 09:31:34 localhost kernel: CPU topo: Max. threads per core:   1
Nov 25 09:31:34 localhost kernel: CPU topo: Num. cores per package:     1
Nov 25 09:31:34 localhost kernel: CPU topo: Num. threads per package:   1
Nov 25 09:31:34 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 25 09:31:34 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 25 09:31:34 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 25 09:31:34 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 25 09:31:34 localhost kernel: Booting paravirtualized kernel on KVM
Nov 25 09:31:34 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 25 09:31:34 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 25 09:31:34 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 25 09:31:34 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 25 09:31:34 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 25 09:31:34 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 25 09:31:34 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 09:31:34 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 25 09:31:34 localhost kernel: random: crng init done
Nov 25 09:31:34 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 25 09:31:34 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 25 09:31:34 localhost kernel: Fallback order for Node 0: 0 
Nov 25 09:31:34 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 25 09:31:34 localhost kernel: Policy zone: Normal
Nov 25 09:31:34 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 25 09:31:34 localhost kernel: software IO TLB: area num 8.
Nov 25 09:31:34 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 25 09:31:34 localhost kernel: ftrace: allocating 49313 entries in 193 pages
Nov 25 09:31:34 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 25 09:31:34 localhost kernel: Dynamic Preempt: voluntary
Nov 25 09:31:34 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 25 09:31:34 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 25 09:31:34 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 25 09:31:34 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 25 09:31:34 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 25 09:31:34 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 25 09:31:34 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 25 09:31:34 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 25 09:31:34 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 09:31:34 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 09:31:34 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 25 09:31:34 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 25 09:31:34 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 25 09:31:34 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 25 09:31:34 localhost kernel: Console: colour VGA+ 80x25
Nov 25 09:31:34 localhost kernel: printk: console [ttyS0] enabled
Nov 25 09:31:34 localhost kernel: ACPI: Core revision 20230331
Nov 25 09:31:34 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 25 09:31:34 localhost kernel: x2apic enabled
Nov 25 09:31:34 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 25 09:31:34 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 25 09:31:34 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 25 09:31:34 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 25 09:31:34 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 25 09:31:34 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 25 09:31:34 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 25 09:31:34 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 25 09:31:34 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 25 09:31:34 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 25 09:31:34 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 25 09:31:34 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 25 09:31:34 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 25 09:31:34 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 25 09:31:34 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 25 09:31:34 localhost kernel: x86/bugs: return thunk changed
Nov 25 09:31:34 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 25 09:31:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 25 09:31:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 25 09:31:34 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 25 09:31:34 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 25 09:31:34 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 25 09:31:34 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 25 09:31:34 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 25 09:31:34 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 25 09:31:34 localhost kernel: landlock: Up and running.
Nov 25 09:31:34 localhost kernel: Yama: becoming mindful.
Nov 25 09:31:34 localhost kernel: SELinux:  Initializing.
Nov 25 09:31:34 localhost kernel: LSM support for eBPF active
Nov 25 09:31:34 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 09:31:34 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 25 09:31:34 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 25 09:31:34 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 25 09:31:34 localhost kernel: ... version:                0
Nov 25 09:31:34 localhost kernel: ... bit width:              48
Nov 25 09:31:34 localhost kernel: ... generic registers:      6
Nov 25 09:31:34 localhost kernel: ... value mask:             0000ffffffffffff
Nov 25 09:31:34 localhost kernel: ... max period:             00007fffffffffff
Nov 25 09:31:34 localhost kernel: ... fixed-purpose events:   0
Nov 25 09:31:34 localhost kernel: ... event mask:             000000000000003f
Nov 25 09:31:34 localhost kernel: signal: max sigframe size: 1776
Nov 25 09:31:34 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 25 09:31:34 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 25 09:31:34 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 25 09:31:34 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 25 09:31:34 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 25 09:31:34 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 25 09:31:34 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 25 09:31:34 localhost kernel: node 0 deferred pages initialised in 9ms
Nov 25 09:31:34 localhost kernel: Memory: 7776576K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 605572K reserved, 0K cma-reserved)
Nov 25 09:31:34 localhost kernel: devtmpfs: initialized
Nov 25 09:31:34 localhost kernel: x86/mm: Memory block size: 128MB
Nov 25 09:31:34 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 25 09:31:34 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 25 09:31:34 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 25 09:31:34 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 25 09:31:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 25 09:31:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 25 09:31:34 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 25 09:31:34 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 25 09:31:34 localhost kernel: audit: type=2000 audit(1764063093.206:1): state=initialized audit_enabled=0 res=1
Nov 25 09:31:34 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 25 09:31:34 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 25 09:31:34 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 25 09:31:34 localhost kernel: cpuidle: using governor menu
Nov 25 09:31:34 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 25 09:31:34 localhost kernel: PCI: Using configuration type 1 for base access
Nov 25 09:31:34 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 25 09:31:34 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 25 09:31:34 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 25 09:31:34 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 25 09:31:34 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 25 09:31:34 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 25 09:31:34 localhost kernel: Demotion targets for Node 0: null
Nov 25 09:31:34 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 25 09:31:34 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 25 09:31:34 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 25 09:31:34 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 25 09:31:34 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 25 09:31:34 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 25 09:31:34 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 25 09:31:34 localhost kernel: ACPI: Interpreter enabled
Nov 25 09:31:34 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 25 09:31:34 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 25 09:31:34 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 25 09:31:34 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 25 09:31:34 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 25 09:31:34 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 25 09:31:34 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [3] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [4] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [5] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [6] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [7] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [8] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [9] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [10] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [11] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [12] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [13] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [14] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [15] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [16] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [17] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [18] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [19] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [20] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [21] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [22] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [23] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [24] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [25] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [26] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [27] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [28] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [29] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [30] registered
Nov 25 09:31:34 localhost kernel: acpiphp: Slot [31] registered
Nov 25 09:31:34 localhost kernel: PCI host bridge to bus 0000:00
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 25 09:31:34 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 25 09:31:34 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 25 09:31:34 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 25 09:31:34 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 25 09:31:34 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 25 09:31:34 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 25 09:31:34 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 25 09:31:34 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 25 09:31:34 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 25 09:31:34 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 25 09:31:34 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 25 09:31:34 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 25 09:31:34 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 25 09:31:34 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 25 09:31:34 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 25 09:31:34 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 25 09:31:34 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 25 09:31:34 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 25 09:31:34 localhost kernel: iommu: Default domain type: Translated
Nov 25 09:31:34 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 25 09:31:34 localhost kernel: SCSI subsystem initialized
Nov 25 09:31:34 localhost kernel: ACPI: bus type USB registered
Nov 25 09:31:34 localhost kernel: usbcore: registered new interface driver usbfs
Nov 25 09:31:34 localhost kernel: usbcore: registered new interface driver hub
Nov 25 09:31:34 localhost kernel: usbcore: registered new device driver usb
Nov 25 09:31:34 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 25 09:31:34 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 25 09:31:34 localhost kernel: PTP clock support registered
Nov 25 09:31:34 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 25 09:31:34 localhost kernel: NetLabel: Initializing
Nov 25 09:31:34 localhost kernel: NetLabel:  domain hash size = 128
Nov 25 09:31:34 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 25 09:31:34 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 25 09:31:34 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 25 09:31:34 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 25 09:31:34 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 25 09:31:34 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 25 09:31:34 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 25 09:31:34 localhost kernel: vgaarb: loaded
Nov 25 09:31:34 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 25 09:31:34 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 25 09:31:34 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 25 09:31:34 localhost kernel: pnp: PnP ACPI init
Nov 25 09:31:34 localhost kernel: pnp 00:03: [dma 2]
Nov 25 09:31:34 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 25 09:31:34 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 25 09:31:34 localhost kernel: NET: Registered PF_INET protocol family
Nov 25 09:31:34 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 25 09:31:34 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 25 09:31:34 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 25 09:31:34 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 25 09:31:34 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 25 09:31:34 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 25 09:31:34 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 25 09:31:34 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 09:31:34 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 25 09:31:34 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 25 09:31:34 localhost kernel: NET: Registered PF_XDP protocol family
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 25 09:31:34 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 25 09:31:34 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 25 09:31:34 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 25 09:31:34 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 76969 usecs
Nov 25 09:31:34 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 25 09:31:34 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 25 09:31:34 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 25 09:31:34 localhost kernel: ACPI: bus type thunderbolt registered
Nov 25 09:31:34 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 25 09:31:34 localhost kernel: Initialise system trusted keyrings
Nov 25 09:31:34 localhost kernel: Key type blacklist registered
Nov 25 09:31:34 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 25 09:31:34 localhost kernel: zbud: loaded
Nov 25 09:31:34 localhost kernel: integrity: Platform Keyring initialized
Nov 25 09:31:34 localhost kernel: integrity: Machine keyring initialized
Nov 25 09:31:34 localhost kernel: Freeing initrd memory: 75160K
Nov 25 09:31:34 localhost kernel: NET: Registered PF_ALG protocol family
Nov 25 09:31:34 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 25 09:31:34 localhost kernel: Key type asymmetric registered
Nov 25 09:31:34 localhost kernel: Asymmetric key parser 'x509' registered
Nov 25 09:31:34 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 25 09:31:34 localhost kernel: io scheduler mq-deadline registered
Nov 25 09:31:34 localhost kernel: io scheduler kyber registered
Nov 25 09:31:34 localhost kernel: io scheduler bfq registered
Nov 25 09:31:34 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 25 09:31:34 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 25 09:31:34 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 25 09:31:34 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 25 09:31:34 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 25 09:31:34 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 25 09:31:34 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 25 09:31:34 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 25 09:31:34 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 25 09:31:34 localhost kernel: Non-volatile memory driver v1.3
Nov 25 09:31:34 localhost kernel: rdac: device handler registered
Nov 25 09:31:34 localhost kernel: hp_sw: device handler registered
Nov 25 09:31:34 localhost kernel: emc: device handler registered
Nov 25 09:31:34 localhost kernel: alua: device handler registered
Nov 25 09:31:34 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 25 09:31:34 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 25 09:31:34 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 25 09:31:34 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 25 09:31:34 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 25 09:31:34 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 25 09:31:34 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 25 09:31:34 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 25 09:31:34 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 25 09:31:34 localhost kernel: hub 1-0:1.0: USB hub found
Nov 25 09:31:34 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 25 09:31:34 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 25 09:31:34 localhost kernel: usbserial: USB Serial support registered for generic
Nov 25 09:31:34 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 25 09:31:34 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 25 09:31:34 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 25 09:31:34 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 25 09:31:34 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 25 09:31:34 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 25 09:31:34 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 25 09:31:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 25 09:31:34 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-25T09:31:33 UTC (1764063093)
Nov 25 09:31:34 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 25 09:31:34 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 25 09:31:34 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 25 09:31:34 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 25 09:31:34 localhost kernel: usbcore: registered new interface driver usbhid
Nov 25 09:31:34 localhost kernel: usbhid: USB HID core driver
Nov 25 09:31:34 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 25 09:31:34 localhost kernel: Initializing XFRM netlink socket
Nov 25 09:31:34 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 25 09:31:34 localhost kernel: Segment Routing with IPv6
Nov 25 09:31:34 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 25 09:31:34 localhost kernel: mpls_gso: MPLS GSO support
Nov 25 09:31:34 localhost kernel: IPI shorthand broadcast: enabled
Nov 25 09:31:34 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 25 09:31:34 localhost kernel: AES CTR mode by8 optimization enabled
Nov 25 09:31:34 localhost kernel: sched_clock: Marking stable (1175003280, 150680610)->(1402010490, -76326600)
Nov 25 09:31:34 localhost kernel: registered taskstats version 1
Nov 25 09:31:34 localhost kernel: Loading compiled-in X.509 certificates
Nov 25 09:31:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 09:31:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 25 09:31:34 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 25 09:31:34 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 25 09:31:34 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 25 09:31:34 localhost kernel: Demotion targets for Node 0: null
Nov 25 09:31:34 localhost kernel: page_owner is disabled
Nov 25 09:31:34 localhost kernel: Key type .fscrypt registered
Nov 25 09:31:34 localhost kernel: Key type fscrypt-provisioning registered
Nov 25 09:31:34 localhost kernel: Key type big_key registered
Nov 25 09:31:34 localhost kernel: Key type encrypted registered
Nov 25 09:31:34 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 25 09:31:34 localhost kernel: Loading compiled-in module X.509 certificates
Nov 25 09:31:34 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 25 09:31:34 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 25 09:31:34 localhost kernel: ima: No architecture policies found
Nov 25 09:31:34 localhost kernel: evm: Initialising EVM extended attributes:
Nov 25 09:31:34 localhost kernel: evm: security.selinux
Nov 25 09:31:34 localhost kernel: evm: security.SMACK64 (disabled)
Nov 25 09:31:34 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 25 09:31:34 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 25 09:31:34 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 25 09:31:34 localhost kernel: evm: security.apparmor (disabled)
Nov 25 09:31:34 localhost kernel: evm: security.ima
Nov 25 09:31:34 localhost kernel: evm: security.capability
Nov 25 09:31:34 localhost kernel: evm: HMAC attrs: 0x1
Nov 25 09:31:34 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 25 09:31:34 localhost kernel: Running certificate verification RSA selftest
Nov 25 09:31:34 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 25 09:31:34 localhost kernel: Running certificate verification ECDSA selftest
Nov 25 09:31:34 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 25 09:31:34 localhost kernel: clk: Disabling unused clocks
Nov 25 09:31:34 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 25 09:31:34 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 25 09:31:34 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 25 09:31:34 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 25 09:31:34 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 25 09:31:34 localhost kernel: Run /init as init process
Nov 25 09:31:34 localhost kernel:   with arguments:
Nov 25 09:31:34 localhost kernel:     /init
Nov 25 09:31:34 localhost kernel:   with environment:
Nov 25 09:31:34 localhost kernel:     HOME=/
Nov 25 09:31:34 localhost kernel:     TERM=linux
Nov 25 09:31:34 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64
Nov 25 09:31:34 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 09:31:34 localhost systemd[1]: Detected virtualization kvm.
Nov 25 09:31:34 localhost systemd[1]: Detected architecture x86-64.
Nov 25 09:31:34 localhost systemd[1]: Running in initrd.
Nov 25 09:31:34 localhost systemd[1]: No hostname configured, using default hostname.
Nov 25 09:31:34 localhost systemd[1]: Hostname set to <localhost>.
Nov 25 09:31:34 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 25 09:31:34 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 25 09:31:34 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 25 09:31:34 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 25 09:31:34 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 25 09:31:34 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 25 09:31:34 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 25 09:31:34 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 25 09:31:34 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 25 09:31:34 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 09:31:34 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 09:31:34 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 25 09:31:34 localhost systemd[1]: Reached target Local File Systems.
Nov 25 09:31:34 localhost systemd[1]: Reached target Path Units.
Nov 25 09:31:34 localhost systemd[1]: Reached target Slice Units.
Nov 25 09:31:34 localhost systemd[1]: Reached target Swaps.
Nov 25 09:31:34 localhost systemd[1]: Reached target Timer Units.
Nov 25 09:31:34 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 09:31:34 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 25 09:31:34 localhost systemd[1]: Listening on Journal Socket.
Nov 25 09:31:34 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 09:31:34 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 09:31:34 localhost systemd[1]: Reached target Socket Units.
Nov 25 09:31:34 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 09:31:34 localhost systemd[1]: Starting Journal Service...
Nov 25 09:31:34 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 09:31:34 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 09:31:34 localhost systemd[1]: Starting Create System Users...
Nov 25 09:31:34 localhost systemd[1]: Starting Setup Virtual Console...
Nov 25 09:31:34 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 09:31:34 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 09:31:34 localhost systemd[1]: Finished Create System Users.
Nov 25 09:31:34 localhost systemd-journald[301]: Journal started
Nov 25 09:31:34 localhost systemd-journald[301]: Runtime Journal (/run/log/journal/2c41005d422044aaa37c4fdfb3e65238) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:31:34 localhost systemd-sysusers[306]: Creating group 'users' with GID 100.
Nov 25 09:31:34 localhost systemd-sysusers[306]: Creating group 'dbus' with GID 81.
Nov 25 09:31:34 localhost systemd-sysusers[306]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 25 09:31:34 localhost systemd[1]: Started Journal Service.
Nov 25 09:31:34 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 09:31:34 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 09:31:34 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 09:31:34 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 09:31:34 localhost systemd[1]: Finished Setup Virtual Console.
Nov 25 09:31:34 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 25 09:31:34 localhost systemd[1]: Starting dracut cmdline hook...
Nov 25 09:31:34 localhost dracut-cmdline[322]: dracut-9 dracut-057-102.git20250818.el9
Nov 25 09:31:34 localhost dracut-cmdline[322]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 25 09:31:34 localhost systemd[1]: Finished dracut cmdline hook.
Nov 25 09:31:34 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 25 09:31:34 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 25 09:31:34 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 25 09:31:34 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 25 09:31:34 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 25 09:31:34 localhost kernel: RPC: Registered udp transport module.
Nov 25 09:31:34 localhost kernel: RPC: Registered tcp transport module.
Nov 25 09:31:34 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 25 09:31:34 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 25 09:31:34 localhost rpc.statd[438]: Version 2.5.4 starting
Nov 25 09:31:34 localhost rpc.statd[438]: Initializing NSM state
Nov 25 09:31:34 localhost rpc.idmapd[443]: Setting log level to 0
Nov 25 09:31:34 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 25 09:31:34 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 09:31:34 localhost systemd-udevd[456]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 09:31:34 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 09:31:34 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 25 09:31:34 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 25 09:31:34 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 25 09:31:34 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 25 09:31:34 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:31:34 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 09:31:34 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:31:34 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:31:34 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 09:31:34 localhost systemd[1]: Reached target Network.
Nov 25 09:31:34 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 25 09:31:34 localhost systemd[1]: Starting dracut initqueue hook...
Nov 25 09:31:34 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 25 09:31:34 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 25 09:31:34 localhost kernel:  vda: vda1
Nov 25 09:31:34 localhost kernel: libata version 3.00 loaded.
Nov 25 09:31:34 localhost systemd-udevd[494]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:31:34 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 25 09:31:34 localhost kernel: scsi host0: ata_piix
Nov 25 09:31:34 localhost kernel: scsi host1: ata_piix
Nov 25 09:31:34 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 25 09:31:34 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 25 09:31:35 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 25 09:31:35 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 25 09:31:35 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 09:31:35 localhost systemd[1]: Reached target Initrd Root Device.
Nov 25 09:31:35 localhost systemd[1]: Reached target System Initialization.
Nov 25 09:31:35 localhost systemd[1]: Reached target Basic System.
Nov 25 09:31:35 localhost kernel: ata1: found unknown device (class 0)
Nov 25 09:31:35 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 25 09:31:35 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 25 09:31:35 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 25 09:31:35 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 25 09:31:35 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 25 09:31:35 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 25 09:31:35 localhost systemd[1]: Finished dracut initqueue hook.
Nov 25 09:31:35 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 09:31:35 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 25 09:31:35 localhost systemd[1]: Reached target Remote File Systems.
Nov 25 09:31:35 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 25 09:31:35 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 25 09:31:35 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 25 09:31:35 localhost systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Nov 25 09:31:35 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 25 09:31:35 localhost systemd[1]: Mounting /sysroot...
Nov 25 09:31:35 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 25 09:31:35 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 25 09:31:36 localhost kernel: XFS (vda1): Ending clean mount
Nov 25 09:31:36 localhost systemd[1]: Mounted /sysroot.
Nov 25 09:31:36 localhost systemd[1]: Reached target Initrd Root File System.
Nov 25 09:31:36 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 25 09:31:36 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 25 09:31:36 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 25 09:31:36 localhost systemd[1]: Reached target Initrd File Systems.
Nov 25 09:31:36 localhost systemd[1]: Reached target Initrd Default Target.
Nov 25 09:31:36 localhost systemd[1]: Starting dracut mount hook...
Nov 25 09:31:36 localhost systemd[1]: Finished dracut mount hook.
Nov 25 09:31:36 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 25 09:31:37 localhost rpc.idmapd[443]: exiting on signal 15
Nov 25 09:31:37 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 25 09:31:37 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 25 09:31:37 localhost systemd[1]: Stopped target Network.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Timer Units.
Nov 25 09:31:37 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 25 09:31:37 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Basic System.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Path Units.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Remote File Systems.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Slice Units.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Socket Units.
Nov 25 09:31:37 localhost systemd[1]: Stopped target System Initialization.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Local File Systems.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Swaps.
Nov 25 09:31:37 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped dracut mount hook.
Nov 25 09:31:37 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 25 09:31:37 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 25 09:31:37 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 25 09:31:37 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 25 09:31:37 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 25 09:31:37 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 25 09:31:37 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 25 09:31:37 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 25 09:31:37 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 25 09:31:37 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 25 09:31:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 25 09:31:37 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 25 09:31:37 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Closed udev Control Socket.
Nov 25 09:31:37 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Closed udev Kernel Socket.
Nov 25 09:31:37 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 25 09:31:37 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 25 09:31:37 localhost systemd[1]: Starting Cleanup udev Database...
Nov 25 09:31:37 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 25 09:31:37 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 25 09:31:37 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Stopped Create System Users.
Nov 25 09:31:37 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 25 09:31:37 localhost systemd[1]: Finished Cleanup udev Database.
Nov 25 09:31:37 localhost systemd[1]: Reached target Switch Root.
Nov 25 09:31:37 localhost systemd[1]: Starting Switch Root...
Nov 25 09:31:37 localhost systemd[1]: Switching root.
Nov 25 09:31:37 localhost systemd-journald[301]: Journal stopped
Nov 25 09:31:39 localhost systemd-journald[301]: Received SIGTERM from PID 1 (systemd).
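
The two journald lines above mark the initramfs-to-root handoff: PID 1 switches root, the initrd's journald (PID 301) stops, and a fresh instance starts from the real root further down. A minimal way to isolate the initrd phase of this boot afterwards, assuming the runtime journal was flushed to persistent storage as it is later in this log:

    # monotonic timestamps make the pre-switch-root phase easy to spot:
    # everything before "Switching root." ran from the initramfs
    journalctl -b -o short-monotonic | less
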
Nov 25 09:31:39 localhost kernel: audit: type=1404 audit(1764063097.825:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 25 09:31:39 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:31:39 localhost kernel: SELinux:  policy capability open_perms=1
Nov 25 09:31:39 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:31:39 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:31:39 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:31:39 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:31:39 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:31:39 localhost kernel: audit: type=1403 audit(1764063098.076:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 25 09:31:39 localhost systemd[1]: Successfully loaded SELinux policy in 255.351ms.
Nov 25 09:31:39 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.071ms.
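
The type=1403 audit record (policy load) and the enforcing=1 transition in the earlier type=1404 record correspond to systemd loading the SELinux policy and switching into enforcing mode during early boot. The resulting state can be confirmed from a shell with the standard SELinux tools:

    getenforce    # expected here: Enforcing
    sestatus      # loaded policy name, current mode, MLS status
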
Nov 25 09:31:39 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 25 09:31:39 localhost systemd[1]: Detected virtualization kvm.
Nov 25 09:31:39 localhost systemd[1]: Detected architecture x86-64.
Nov 25 09:31:39 localhost systemd-rc-local-generator[632]: /etc/rc.d/rc.local is not marked executable, skipping.
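
systemd-rc-local-generator only generates rc-local.service when /etc/rc.d/rc.local carries the execute bit, so this skip is expected on a stock image. If the compatibility hook is actually wanted:

    chmod +x /etc/rc.d/rc.local
    systemctl daemon-reload           # re-run generators so rc-local.service appears
    systemctl start rc-local.service  # or just leave it for the next boot
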
Nov 25 09:31:39 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 25 09:31:39 localhost systemd[1]: Stopped Switch Root.
Nov 25 09:31:39 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 25 09:31:39 localhost systemd[1]: Created slice Slice /system/getty.
Nov 25 09:31:39 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 25 09:31:39 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 25 09:31:39 localhost systemd[1]: Created slice User and Session Slice.
Nov 25 09:31:39 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 25 09:31:39 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 25 09:31:39 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 25 09:31:39 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 25 09:31:39 localhost systemd[1]: Stopped target Switch Root.
Nov 25 09:31:39 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 25 09:31:39 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 25 09:31:39 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 25 09:31:39 localhost systemd[1]: Reached target Path Units.
Nov 25 09:31:39 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 25 09:31:39 localhost systemd[1]: Reached target Slice Units.
Nov 25 09:31:39 localhost systemd[1]: Reached target Swaps.
Nov 25 09:31:39 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 25 09:31:39 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 25 09:31:39 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 25 09:31:39 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 25 09:31:39 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 25 09:31:39 localhost systemd[1]: Listening on udev Control Socket.
Nov 25 09:31:39 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 25 09:31:39 localhost systemd[1]: Mounting Huge Pages File System...
Nov 25 09:31:39 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 25 09:31:39 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 25 09:31:39 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 25 09:31:39 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 09:31:39 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 25 09:31:39 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:31:39 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 25 09:31:39 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 25 09:31:39 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 25 09:31:39 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 25 09:31:39 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 25 09:31:39 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 25 09:31:39 localhost systemd[1]: Stopped Journal Service.
Nov 25 09:31:39 localhost systemd[1]: Starting Journal Service...
Nov 25 09:31:39 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 25 09:31:39 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 25 09:31:39 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:31:39 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 25 09:31:39 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 25 09:31:39 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 25 09:31:39 localhost systemd[1]: Starting Coldplug All udev Devices...
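
Several units above were "skipped because of an unmet condition check" rather than failed: Condition*= directives in a unit's [Unit] section gate activation silently. Taking the RPCSEC_GSS module unit as an example (auth-rpcgss-module.service is the nfs-utils unit guarded by /etc/krb5.keytab; the unit name is inferred from its description, not stated in the log):

    systemctl cat auth-rpcgss-module.service | grep '^Condition'
    systemctl show auth-rpcgss-module.service -p ConditionResult
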
Nov 25 09:31:39 localhost kernel: fuse: init (API version 7.37)
Nov 25 09:31:39 localhost systemd-journald[674]: Journal started
Nov 25 09:31:39 localhost systemd-journald[674]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:31:39 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 25 09:31:39 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 25 09:31:39 localhost systemd[1]: Mounted Huge Pages File System.
Nov 25 09:31:39 localhost systemd[1]: Started Journal Service.
Nov 25 09:31:39 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
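
The XFS message is informational: the root filesystem was created without the bigtime feature, so its inode timestamps cap at 2038-01-19 (0x7fffffff seconds since the epoch). Whether a given XFS filesystem has the extended range can be read from xfsprogs:

    xfs_info / | grep -o 'bigtime=[01]'   # bigtime=1 extends timestamps to the year 2486
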
Nov 25 09:31:39 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 25 09:31:39 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 25 09:31:39 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 25 09:31:39 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 25 09:31:39 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:31:39 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:31:39 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 25 09:31:39 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 25 09:31:39 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 25 09:31:39 localhost systemd[1]: Finished Load Kernel Module fuse.
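
The Load Kernel Module entries use the modprobe@.service template, a oneshot wrapper around modprobe for whatever instance name follows the "@"; "Deactivated successfully" after each run is the normal end state for a oneshot unit. The equivalent by hand:

    systemctl start modprobe@fuse.service   # roughly the same as: modprobe fuse
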
Nov 25 09:31:39 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 25 09:31:39 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 25 09:31:39 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 25 09:31:39 localhost kernel: ACPI: bus type drm_connector registered
Nov 25 09:31:39 localhost systemd[1]: Mounting FUSE Control File System...
Nov 25 09:31:39 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 09:31:39 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 25 09:31:39 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 25 09:31:39 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 25 09:31:39 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 25 09:31:40 localhost systemd[1]: Starting Create System Users...
Nov 25 09:31:40 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 25 09:31:40 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 25 09:31:40 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 25 09:31:40 localhost systemd-journald[674]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 25 09:31:40 localhost systemd-journald[674]: Received client request to flush runtime journal.
Nov 25 09:31:40 localhost systemd[1]: Mounted FUSE Control File System.
Nov 25 09:31:40 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 25 09:31:40 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 25 09:31:40 localhost systemd[1]: Finished Create System Users.
Nov 25 09:31:40 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 25 09:31:40 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 25 09:31:40 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 25 09:31:40 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 25 09:31:40 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 25 09:31:40 localhost systemd[1]: Reached target Local File Systems.
Nov 25 09:31:40 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 25 09:31:40 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 25 09:31:40 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 25 09:31:40 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 25 09:31:40 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 25 09:31:40 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 25 09:31:40 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 25 09:31:40 localhost bootctl[693]: Couldn't find EFI system partition, skipping.
Nov 25 09:31:40 localhost systemd[1]: Finished Automatic Boot Loader Update.
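
bootctl finding no EFI system partition is consistent with a legacy-BIOS guest, and the TPM2/EFI units above were skipped on missing efivars paths for the same reason. The firmware type can be confirmed directly:

    test -d /sys/firmware/efi && echo UEFI || echo BIOS   # BIOS on this guest
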
Nov 25 09:31:40 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 25 09:31:40 localhost systemd[1]: Starting Security Auditing Service...
Nov 25 09:31:40 localhost systemd[1]: Starting RPC Bind...
Nov 25 09:31:40 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 25 09:31:40 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 25 09:31:40 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 25 09:31:40 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 25 09:31:40 localhost systemd[1]: Started RPC Bind.
Nov 25 09:31:40 localhost augenrules[705]: /sbin/augenrules: No change
Nov 25 09:31:40 localhost augenrules[720]: No rules
Nov 25 09:31:40 localhost augenrules[720]: enabled 1
Nov 25 09:31:40 localhost augenrules[720]: failure 1
Nov 25 09:31:40 localhost augenrules[720]: pid 700
Nov 25 09:31:40 localhost augenrules[720]: rate_limit 0
Nov 25 09:31:40 localhost augenrules[720]: backlog_limit 8192
Nov 25 09:31:40 localhost augenrules[720]: lost 0
Nov 25 09:31:40 localhost augenrules[720]: backlog 0
Nov 25 09:31:40 localhost augenrules[720]: backlog_wait_time 60000
Nov 25 09:31:40 localhost augenrules[720]: backlog_wait_time_actual 0
Nov 25 09:31:40 localhost augenrules[720]: enabled 1
Nov 25 09:31:40 localhost augenrules[720]: failure 1
Nov 25 09:31:40 localhost augenrules[720]: pid 700
Nov 25 09:31:40 localhost augenrules[720]: rate_limit 0
Nov 25 09:31:40 localhost augenrules[720]: backlog_limit 8192
Nov 25 09:31:40 localhost augenrules[720]: lost 0
Nov 25 09:31:40 localhost augenrules[720]: backlog 4
Nov 25 09:31:40 localhost augenrules[720]: backlog_wait_time 60000
Nov 25 09:31:40 localhost augenrules[720]: backlog_wait_time_actual 0
Nov 25 09:31:40 localhost augenrules[720]: enabled 1
Nov 25 09:31:40 localhost augenrules[720]: failure 1
Nov 25 09:31:40 localhost augenrules[720]: pid 700
Nov 25 09:31:40 localhost augenrules[720]: rate_limit 0
Nov 25 09:31:40 localhost augenrules[720]: backlog_limit 8192
Nov 25 09:31:40 localhost augenrules[720]: lost 0
Nov 25 09:31:40 localhost augenrules[720]: backlog 4
Nov 25 09:31:40 localhost augenrules[720]: backlog_wait_time 60000
Nov 25 09:31:40 localhost augenrules[720]: backlog_wait_time_actual 0
Nov 25 09:31:40 localhost systemd[1]: Started Security Auditing Service.
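
The augenrules[720] block is the kernel audit status, the same report `auditctl -s` prints; "No rules" means /etc/audit/rules.d compiled to an empty rule set. The status appears three times, apparently once per load phase, and the only field that changes between dumps is the transient backlog counter. To reproduce the report later:

    auditctl -s   # enabled flag, failure mode, backlog_limit, lost/backlog counters
    auditctl -l   # loaded rules ("No rules" on this host)
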
Nov 25 09:31:40 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 25 09:31:41 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 25 09:31:41 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 25 09:31:41 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 25 09:31:41 localhost systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Nov 25 09:31:42 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 25 09:31:42 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 25 09:31:42 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 25 09:31:42 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 25 09:31:42 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 25 09:31:42 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 25 09:31:42 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 25 09:31:42 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 25 09:31:42 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 25 09:31:42 localhost systemd-udevd[733]: Network interface NamePolicy= disabled on kernel command line.
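
Both udev messages trace back to the kernel command line: net.ifnames=0 disables the predictable-interface-name policy, which is why the NIC keeps the classic eth0 name throughout the rest of this log. This is easy to verify at runtime:

    grep -o 'net.ifnames=0' /proc/cmdline && echo "predictable NIC names disabled"
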
Nov 25 09:31:42 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 25 09:31:42 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 25 09:31:42 localhost kernel: Console: switching to colour dummy device 80x25
Nov 25 09:31:42 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 25 09:31:42 localhost kernel: [drm] features: -context_init
Nov 25 09:31:42 localhost kernel: [drm] number of scanouts: 1
Nov 25 09:31:42 localhost kernel: [drm] number of cap sets: 0
Nov 25 09:31:42 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 25 09:31:42 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 25 09:31:42 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 25 09:31:42 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 25 09:31:42 localhost kernel: kvm_amd: TSC scaling supported
Nov 25 09:31:42 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 25 09:31:42 localhost kernel: kvm_amd: Nested Paging enabled
Nov 25 09:31:42 localhost kernel: kvm_amd: LBR virtualization supported
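
The kvm_amd lines show that this KVM guest itself loaded the AMD virtualization module with nested paging enabled, i.e. it could host L2 guests of its own. Quick checks:

    cat /sys/module/kvm_amd/parameters/nested   # 1 (or Y) when nested virt is on
    lsmod | grep kvm                            # kvm_amd and kvm both loaded
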
Nov 25 09:31:42 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 25 09:31:42 localhost systemd[1]: Starting Update is Completed...
Nov 25 09:31:42 localhost systemd[1]: Finished Update is Completed.
Nov 25 09:31:42 localhost systemd[1]: Reached target System Initialization.
Nov 25 09:31:42 localhost systemd[1]: Started dnf makecache --timer.
Nov 25 09:31:42 localhost systemd[1]: Started Daily rotation of log files.
Nov 25 09:31:42 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 25 09:31:42 localhost systemd[1]: Reached target Timer Units.
Nov 25 09:31:42 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 25 09:31:42 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 25 09:31:42 localhost systemd[1]: Reached target Socket Units.
Nov 25 09:31:42 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 25 09:31:42 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:31:42 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 25 09:31:42 localhost systemd[1]: Reached target Basic System.
Nov 25 09:31:42 localhost dbus-broker-lau[812]: Ready
Nov 25 09:31:42 localhost systemd[1]: Starting NTP client/server...
Nov 25 09:31:42 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 25 09:31:42 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 25 09:31:42 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 25 09:31:42 localhost systemd[1]: Started irqbalance daemon.
Nov 25 09:31:42 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 25 09:31:42 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:31:42 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:31:42 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 09:31:42 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 25 09:31:42 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 25 09:31:42 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 25 09:31:42 localhost systemd[1]: Starting User Login Management...
Nov 25 09:31:42 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 25 09:31:43 localhost systemd-logind[822]: New seat seat0.
Nov 25 09:31:43 localhost systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 09:31:43 localhost systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 09:31:43 localhost chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 09:31:43 localhost systemd[1]: Started User Login Management.
Nov 25 09:31:43 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 25 09:31:43 localhost chronyd[831]: Loaded 0 symmetric keys
Nov 25 09:31:43 localhost chronyd[831]: Using right/UTC timezone to obtain leap second data
Nov 25 09:31:43 localhost chronyd[831]: Loaded seccomp filter (level 2)
Nov 25 09:31:43 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 25 09:31:43 localhost systemd[1]: Started NTP client/server.
Nov 25 09:31:43 localhost iptables.init[817]: iptables: Applying firewall rules: [  OK  ]
Nov 25 09:31:43 localhost systemd[1]: Finished IPv4 firewall with iptables.
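
The IPv4 firewall unit is the legacy iptables-services script restoring the saved rules from /etc/sysconfig/iptables, and the two "Deprecated Driver" kernel warnings show those rules entering the kernel through the nft_compat translation layer, which RHEL flags for removal in a future major release. The backend in use is visible from both sides:

    iptables -V               # "(nf_tables)" marks the compat backend
    nft list ruleset | head   # the same rules as nftables sees them
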
Nov 25 09:31:44 localhost cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 25 Nov 2025 09:31:44 +0000. Up 12.25 seconds.
Nov 25 09:31:44 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 25 09:31:44 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 25 09:31:44 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp94jprly2.mount: Deactivated successfully.
Nov 25 09:31:45 localhost systemd[1]: Starting Hostname Service...
Nov 25 09:31:45 localhost systemd[1]: Started Hostname Service.
Nov 25 09:31:45 np0005534753.novalocal systemd-hostnamed[854]: Hostname set to <np0005534753.novalocal> (static)
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Reached target Preparation for Network.
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Starting Network Manager...
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.4862] NetworkManager (version 1.54.1-1.el9) is starting... (boot:e06b8e8c-0c4c-4141-b318-1ef0fbbec151)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.4869] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5115] manager[0x5608630fa080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5180] hostname: hostname: using hostnamed
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5180] hostname: static hostname changed from (none) to "np0005534753.novalocal"
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5190] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5318] manager[0x5608630fa080]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5319] manager[0x5608630fa080]: rfkill: WWAN hardware radio set enabled
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5493] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5494] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5496] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5497] manager: Networking is enabled by state file
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5501] settings: Loaded settings plugin: keyfile (internal)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5534] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5597] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
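
NetworkManager loads the deprecated ifcfg-rh plugin because legacy /etc/sysconfig/network-scripts profiles (such as 'System eth0', activated below) still exist, and the warning names the supported migration path itself:

    nmcli connection migrate                    # rewrite ifcfg profiles as keyfiles
    ls /etc/NetworkManager/system-connections/  # migrated profiles land here
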
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5627] dhcp: init: Using DHCP client 'internal'
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5631] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5648] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5677] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5687] device (lo): Activation: starting connection 'lo' (14c424d9-56c8-4f39-a02e-7c90be18328a)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5697] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5701] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Started Network Manager.
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5734] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5738] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5741] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5743] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5744] device (eth0): carrier: link connected
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5748] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Reached target Network.
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5755] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5766] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5771] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5772] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5775] manager: NetworkManager state is now CONNECTING
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5777] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5785] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5789] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5859] dhcp4 (eth0): state changed new lease, address=38.102.83.147
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5871] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.5897] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Reached target NFS client services.
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6239] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6242] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6250] device (lo): Activation: successful, device activated.
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: Reached target Remote File Systems.
Nov 25 09:31:45 np0005534753.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6269] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6272] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6275] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6280] device (eth0): Activation: successful, device activated.
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6287] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 09:31:45 np0005534753.novalocal NetworkManager[858]: <info>  [1764063105.6291] manager: startup complete
Nov 25 09:31:46 np0005534753.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 25 09:31:46 np0005534753.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 25 Nov 2025 09:31:46 +0000. Up 14.07 seconds.
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |  eth0  | True |        38.102.83.147         | 255.255.255.0 | global | fa:16:3e:bf:ca:e7 |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |  eth0  | True | fe80::f816:3eff:febf:cae7/64 |       .       |  link  | fa:16:3e:bf:ca:e7 |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 25 09:31:46 np0005534753.novalocal cloud-init[927]: ci-info: +-------+-------------+---------+-----------+-------+
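
Route 2 in the IPv4 table is the OpenStack metadata path: 169.254.169.254 is reached through the 38.102.83.126 gateway rather than link-locally. Assuming the metadata service answers on that standard address (cloud-init here ultimately used the config drive instead, per the final-stage line further down), the same data can be fetched by hand:

    ip route get 169.254.169.254   # should resolve via 38.102.83.126 on eth0
    curl -s http://169.254.169.254/openstack/latest/meta_data.json | head -c 200
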
Nov 25 09:31:50 np0005534753.novalocal chronyd[831]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Nov 25 09:31:50 np0005534753.novalocal chronyd[831]: System clock TAI offset set to 37 seconds
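
chronyd selected a pool server about seven seconds after starting and set the TAI offset (TAI-UTC = 37 s, the current leap-second total). Ongoing synchronization can be inspected with:

    chronyc sources -v   # '*' marks the selected source
    chronyc tracking     # current offset, frequency, leap status
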
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: IRQ 25 affinity is now unmanaged
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: IRQ 31 affinity is now unmanaged
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: IRQ 28 affinity is now unmanaged
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: IRQ 32 affinity is now unmanaged
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: IRQ 30 affinity is now unmanaged
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 25 09:31:53 np0005534753.novalocal irqbalance[818]: IRQ 29 affinity is now unmanaged
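
The irqbalance errors are benign on KVM guests: some interrupts, commonly virtio MSI-X vectors (an inference; the log does not name the devices), reject affinity changes from inside the guest, so irqbalance marks them unmanaged and moves on. The owner and mask of any of them can be inspected:

    grep ' 25:' /proc/interrupts    # which device owns IRQ 25
    cat /proc/irq/25/smp_affinity   # its current (now unmanaged) CPU mask
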
Nov 25 09:31:55 np0005534753.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:32:01 np0005534753.novalocal useradd[993]: new group: name=cloud-user, GID=1001
Nov 25 09:32:01 np0005534753.novalocal useradd[993]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 25 09:32:01 np0005534753.novalocal useradd[993]: add 'cloud-user' to group 'adm'
Nov 25 09:32:01 np0005534753.novalocal useradd[993]: add 'cloud-user' to group 'systemd-journal'
Nov 25 09:32:01 np0005534753.novalocal useradd[993]: add 'cloud-user' to shadow group 'adm'
Nov 25 09:32:01 np0005534753.novalocal useradd[993]: add 'cloud-user' to shadow group 'systemd-journal'
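
The cloud-user account is cloud-init's distro default user on RHEL-family images, defined under system_info/default_user in /etc/cloud/cloud.cfg rather than in per-instance user-data (an assumption for this image; instance user-data can override the default). The definition that produced the group memberships above can be read with:

    grep -A8 'default_user' /etc/cloud/cloud.cfg
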
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Generating public/private rsa key pair.
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: The key fingerprint is:
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: SHA256:0G1LeoT3TaseSuPTnCmL3iS8AN65x3ykcuT9sLm0FJk root@np0005534753.novalocal
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: The key's randomart image is:
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: +---[RSA 3072]----+
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |                 |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |       . o       |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |      . o *   .  |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |       . * = o . |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |    .   S E . o  |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |   . o o.... .   |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |    . +=o+Booo   |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |      .oB@=O=.   |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |      .=+.X*o    |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: +----[SHA256]-----+
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Generating public/private ecdsa key pair.
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: The key fingerprint is:
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: SHA256:0ZD9JCuxnGcvpTSWxzeHA2cmtoKDuM7ZChzW6hisqwI root@np0005534753.novalocal
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: The key's randomart image is:
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: +---[ECDSA 256]---+
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |        .o       |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |        ooo = +  |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |     . o.=.O B . |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |   .. . B.X * = .|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |  o ..  SB B . + |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |Eo o.     o .    |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |o.+o o     .     |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |o+ .+ .          |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |B.. ..           |
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: +----[SHA256]-----+
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Generating public/private ed25519 key pair.
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: The key fingerprint is:
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: SHA256:ODU2AXRcCetMR9beVbQbtte3yN/SOs2mBQ5wXsWfrY8 root@np0005534753.novalocal
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: The key's randomart image is:
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: +--[ED25519 256]--+
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |     .oo+o+o   o=|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |       ..=. .  .+|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |        B o....=+|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |       B + +..o.B|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |      o S   o .++|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |       .    .oo.+|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |             o.B.|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |              E.B|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: |              oB.|
Nov 25 09:32:05 np0005534753.novalocal cloud-init[927]: +----[SHA256]-----+
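
cloud-init generated all three host key pairs (RSA 3072, ECDSA 256, Ed25519) in one pass; the same fingerprints are echoed in the console banner further down so they can be verified out of band before the first SSH connection. To re-derive them from the files on disk:

    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done
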
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Reached target Network is Online.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Starting System Logging Service...
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 25 09:32:05 np0005534753.novalocal sm-notify[1009]: Version 2.5.4 starting
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Starting Permit User Sessions...
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 25 09:32:05 np0005534753.novalocal sshd[1011]: Server listening on 0.0.0.0 port 22.
Nov 25 09:32:05 np0005534753.novalocal sshd[1011]: Server listening on :: port 22.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Finished Permit User Sessions.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Started Command Scheduler.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Started Getty on tty1.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Reached target Login Prompts.
Nov 25 09:32:05 np0005534753.novalocal crond[1014]: (CRON) STARTUP (1.5.7)
Nov 25 09:32:05 np0005534753.novalocal crond[1014]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 25 09:32:05 np0005534753.novalocal crond[1014]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 86% if used.)
Nov 25 09:32:05 np0005534753.novalocal crond[1014]: (CRON) INFO (running with inotify support)
Nov 25 09:32:05 np0005534753.novalocal rsyslogd[1010]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1010" x-info="https://www.rsyslog.com"] start
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Started System Logging Service.
Nov 25 09:32:05 np0005534753.novalocal rsyslogd[1010]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Reached target Multi-User System.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 25 09:32:05 np0005534753.novalocal rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1060]: Unable to negotiate with 38.102.83.114 port 50568: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1072]: Unable to negotiate with 38.102.83.114 port 50586: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1076]: Unable to negotiate with 38.102.83.114 port 50596: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1048]: Connection closed by 38.102.83.114 port 50562 [preauth]
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1085]: Unable to negotiate with 38.102.83.114 port 50628: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 25 09:32:05 np0005534753.novalocal cloud-init[1087]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 25 Nov 2025 09:32:05 +0000. Up 33.46 seconds.
Nov 25 09:32:05 np0005534753.novalocal kdumpctl[1022]: kdump: No kdump initial ramdisk found.
Nov 25 09:32:05 np0005534753.novalocal kdumpctl[1022]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1070]: Connection closed by 38.102.83.114 port 50572 [preauth]
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1090]: Unable to negotiate with 38.102.83.114 port 50636: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1079]: Connection closed by 38.102.83.114 port 50612 [preauth]
Nov 25 09:32:05 np0005534753.novalocal sshd-session[1082]: Connection closed by 38.102.83.114 port 50616 [preauth]
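
This burst of "no matching host key type found" entries is a single client (38.102.83.114) opening one connection per host key algorithm, including long-disabled ones such as ssh-dss; the pattern is consistent with an ssh-keyscan-style fingerprint probe rather than failed logins (an inference, since the log records only the negotiations). What this sshd actually offers can be listed with:

    sshd -T | grep -i '^hostkey'   # configured host keys and accepted algorithms
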
Nov 25 09:32:05 np0005534753.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 25 09:32:06 np0005534753.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1250]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 25 Nov 2025 09:32:06 +0000. Up 33.92 seconds.
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1274]: #############################################################
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1275]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1281]: 256 SHA256:0ZD9JCuxnGcvpTSWxzeHA2cmtoKDuM7ZChzW6hisqwI root@np0005534753.novalocal (ECDSA)
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1288]: 256 SHA256:ODU2AXRcCetMR9beVbQbtte3yN/SOs2mBQ5wXsWfrY8 root@np0005534753.novalocal (ED25519)
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1290]: 3072 SHA256:0G1LeoT3TaseSuPTnCmL3iS8AN65x3ykcuT9sLm0FJk root@np0005534753.novalocal (RSA)
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1291]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1294]: #############################################################
Nov 25 09:32:06 np0005534753.novalocal cloud-init[1250]: Cloud-init v. 24.4-7.el9 finished at Tue, 25 Nov 2025 09:32:06 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 34.13 seconds
Nov 25 09:32:06 np0005534753.novalocal dracut[1305]: dracut-057-102.git20250818.el9
Nov 25 09:32:06 np0005534753.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 25 09:32:06 np0005534753.novalocal systemd[1]: Reached target Cloud-init target.
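
With the final stage finished, cloud-init reports its datasource (the OpenStack config drive on /dev/sr0) and total elapsed time, and the boot reaches the cloud-init target. Its status and per-stage timing remain queryable afterwards:

    cloud-init status --long    # expected: status: done
    cloud-init analyze blame    # per-module boot time breakdown
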
Nov 25 09:32:06 np0005534753.novalocal dracut[1307]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
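
kdumpctl found no crash initramfs and is rebuilding one with a deliberately minimal, host-only dracut: only the drivers and root filesystem needed to capture a dump are included, which is why the long run of module checks below skips anything whose binaries are absent plus explicitly omitted modules such as plymouth and resume. After editing /etc/kdump.conf the same rebuild can be triggered by hand:

    kdumpctl rebuild   # regenerate the kdump initramfs
    kdumpctl status    # confirm the crash kernel is loaded
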
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 09:32:07 np0005534753.novalocal dracut[1307]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: memstrack is not available
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: memstrack is not available
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
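
The long run of "will not be installed" messages above comes from dracut probing each module's check() hook before assembly: a module is skipped when the binaries it wraps are absent from the host. A minimal sketch of such a hook, assuming the standard module layout under /usr/lib/dracut/modules.d/<NN><name>/module-setup.sh:

    #!/bin/bash
    # Illustrative dracut module-setup.sh check() hook (not from this host).
    # require_binaries is dracut's helper; a non-zero return here is what
    # produces the "will not be installed, because command '...' could not
    # be found!" lines seen above.
    check() {
        require_binaries lvm || return 1
        return 0
    }
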
Nov 25 09:32:08 np0005534753.novalocal dracut[1307]: *** Including module: systemd ***
Nov 25 09:32:09 np0005534753.novalocal dracut[1307]: *** Including module: fips ***
Nov 25 09:32:09 np0005534753.novalocal dracut[1307]: *** Including module: systemd-initrd ***
Nov 25 09:32:09 np0005534753.novalocal dracut[1307]: *** Including module: i18n ***
Nov 25 09:32:09 np0005534753.novalocal dracut[1307]: *** Including module: drm ***
Nov 25 09:32:10 np0005534753.novalocal dracut[1307]: *** Including module: prefixdevname ***
Nov 25 09:32:10 np0005534753.novalocal dracut[1307]: *** Including module: kernel-modules ***
Nov 25 09:32:12 np0005534753.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]: *** Including module: kernel-modules-extra ***
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
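
The kernel-modules-extra lines above show depmod-style configuration parsing: two of the three conventional config locations are absent, and the search path is taken from /etc/depmod.d/dist.conf. Presumably that file carries a single search directive matching the directories the log reports:

    # Assumed contents of /etc/depmod.d/dist.conf, consistent with the
    # "added ... to the list of search directories" line above:
    cat /etc/depmod.d/dist.conf
    # search updates extra built-in weak-updates
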
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]: *** Including module: qemu ***
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]: *** Including module: fstab-sys ***
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]: *** Including module: rootfs-block ***
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]: *** Including module: terminfo ***
Nov 25 09:32:13 np0005534753.novalocal dracut[1307]: *** Including module: udev-rules ***
Nov 25 09:32:14 np0005534753.novalocal dracut[1307]: Skipping udev rule: 91-permissions.rules
Nov 25 09:32:14 np0005534753.novalocal dracut[1307]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 25 09:32:14 np0005534753.novalocal dracut[1307]: *** Including module: virtiofs ***
Nov 25 09:32:14 np0005534753.novalocal dracut[1307]: *** Including module: dracut-systemd ***
Nov 25 09:32:14 np0005534753.novalocal dracut[1307]: *** Including module: usrmount ***
Nov 25 09:32:14 np0005534753.novalocal dracut[1307]: *** Including module: base ***
Nov 25 09:32:14 np0005534753.novalocal dracut[1307]: *** Including module: fs-lib ***
Nov 25 09:32:14 np0005534753.novalocal dracut[1307]: *** Including module: kdumpbase ***
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:   microcode_ctl module: mangling fw_dir
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 25 09:32:15 np0005534753.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
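
Each intel-* caveat above is reported as ignored, meaning its preconditions (a specific CPU model/stepping, kernel version, or an explicit override) were not met here, so fw_dir is left at its default. The shipped caveat directories can be listed directly (path taken from the log):

    ls /usr/share/microcode_ctl/ucode_with_caveats/
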
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]: *** Including module: openssl ***
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]: *** Including module: shutdown ***
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]: *** Including module: squash ***
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]: *** Including modules done ***
Nov 25 09:32:15 np0005534753.novalocal dracut[1307]: *** Installing kernel module dependencies ***
Nov 25 09:32:17 np0005534753.novalocal dracut[1307]: *** Installing kernel module dependencies done ***
Nov 25 09:32:17 np0005534753.novalocal dracut[1307]: *** Resolving executable dependencies ***
Nov 25 09:32:19 np0005534753.novalocal dracut[1307]: *** Resolving executable dependencies done ***
Nov 25 09:32:19 np0005534753.novalocal dracut[1307]: *** Generating early-microcode cpio image ***
Nov 25 09:32:19 np0005534753.novalocal dracut[1307]: *** Store current command line parameters ***
Nov 25 09:32:19 np0005534753.novalocal dracut[1307]: Stored kernel commandline:
Nov 25 09:32:19 np0005534753.novalocal dracut[1307]: No dracut internal kernel commandline stored in the initramfs
Nov 25 09:32:19 np0005534753.novalocal dracut[1307]: *** Install squash loader ***
Nov 25 09:32:21 np0005534753.novalocal dracut[1307]: *** Squashing the files inside the initramfs ***
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: *** Squashing the files inside the initramfs done ***
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: *** Hardlinking files ***
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: Mode:           real
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: Files:          50
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: Linked:         0 files
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: Compared:       0 xattrs
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: Compared:       0 files
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: Saved:          0 B
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: Duration:       0.000675 seconds
Nov 25 09:32:22 np0005534753.novalocal dracut[1307]: *** Hardlinking files done ***
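
The hardlinking summary above matches the report format of util-linux hardlink(1), which dracut runs over its staging tree to deduplicate identical files before packing; here 50 files yielded no duplicates, hence 0 B saved. A rough standalone equivalent, with the staging path as a placeholder assumption:

    # Replace identical files under a tree with hardlinks; the path below is
    # a stand-in for dracut's temporary initramfs staging directory.
    hardlink -v /path/to/initramfs-staging
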
Nov 25 09:32:22 np0005534753.novalocal sshd-session[4186]: Accepted publickey for zuul from 38.102.83.114 port 47918 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 25 09:32:23 np0005534753.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 25 09:32:23 np0005534753.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 25 09:32:23 np0005534753.novalocal systemd-logind[822]: New session 1 of user zuul.
Nov 25 09:32:23 np0005534753.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 25 09:32:23 np0005534753.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Queued start job for default target Main User Target.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Created slice User Application Slice.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Reached target Paths.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Reached target Timers.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Starting D-Bus User Message Bus Socket...
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Starting Create User's Volatile Files and Directories...
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Finished Create User's Volatile Files and Directories.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Listening on D-Bus User Message Bus Socket.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Reached target Sockets.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Reached target Basic System.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Reached target Main User Target.
Nov 25 09:32:23 np0005534753.novalocal systemd[4191]: Startup finished in 245ms.
Nov 25 09:32:23 np0005534753.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 25 09:32:23 np0005534753.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 25 09:32:23 np0005534753.novalocal sshd-session[4186]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:32:24 np0005534753.novalocal dracut[1307]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
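
Once the image file is written it can be inspected without unpacking; lsinitrd ships with dracut (image path taken from the log):

    lsinitrd /boot/initramfs-5.14.0-642.el9.x86_64kdump.img | head
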
Nov 25 09:32:25 np0005534753.novalocal python3[4276]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:32:26 np0005534753.novalocal kdumpctl[1022]: kdump: kexec: loaded kdump kernel
Nov 25 09:32:26 np0005534753.novalocal kdumpctl[1022]: kdump: Starting kdump: [OK]
Nov 25 09:32:26 np0005534753.novalocal systemd[1]: Finished Crash recovery kernel arming.
Nov 25 09:32:26 np0005534753.novalocal systemd[1]: Startup finished in 1.476s (kernel) + 3.905s (initrd) + 48.376s (userspace) = 53.758s.
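
Both status lines above can be re-checked after boot: kdumpctl reports whether the crash kernel is still loaded, and systemd-analyze reproduces the same kernel/initrd/userspace timing breakdown:

    kdumpctl status        # expected to report kdump as operational after arming
    systemd-analyze time   # prints the kernel + initrd + userspace breakdown
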
Nov 25 09:32:27 np0005534753.novalocal python3[4415]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:32:34 np0005534753.novalocal python3[4473]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:32:34 np0005534753.novalocal python3[4513]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 25 09:32:36 np0005534753.novalocal python3[4539]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD6HBWnIJ/pq87zsr/WprhAv/Sp7hWrVWE8BjiXaNuBysqXGjLQsutt8pUloOZfdcog+s7HzNrxQU7a0A1PQrbJsP8FiVG3EIoDhfsYeNaK6/xCjVxKTHWXFgSREfUYMUGd1HJl/VQhG+XO1Si/SBkZC9c3pqIyDEUabu1KqdPwy83TAJy9PCNstxfPRhC8ZnI+kceqtT48j9LojGbP7JJ4cQMqrWNwpWhSpyBSUXTysvAx/WpeAEf5tha6xvWPD2+3ZY9lybH7AipsDgsHlEBOyAAq2m+bPnYxt/f3WxJHwr9GZYm+2OYfW659HJab8Vi7hjLCeSbMriKZC6tSgnNN2PSnX2vSGK+FgfTzhYfVFMX+FcWDhuP7xuLvrcHgxIpWT1XYBkbI+A5qsEKRnKF+WZo+HGwpMGbnw6fURyQTxkGaZ0G52RdUMXdLT2xKya41ls5FelN1u3oB30CfQXTYcjDGEj/bSA38ywRTFpSqs7R9CNuch2UHgJsRU2aVBWE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:37 np0005534753.novalocal python3[4563]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:37 np0005534753.novalocal python3[4662]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:32:37 np0005534753.novalocal python3[4733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063157.341325-207-184785273629676/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=9a80d6de998c46399c74a9f657ef84a6_id_rsa follow=False checksum=14a28d02ee5b5ab9e8854996ecf8b16ab80e5c3f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:38 np0005534753.novalocal python3[4856]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:32:39 np0005534753.novalocal python3[4927]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063158.3923144-240-172400768174337/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=9a80d6de998c46399c74a9f657ef84a6_id_rsa.pub follow=False checksum=eebdb478f89befb033b4bbd59a51a0f20377cfc5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:40 np0005534753.novalocal python3[4975]: ansible-ping Invoked with data=pong
Nov 25 09:32:41 np0005534753.novalocal python3[4999]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:32:43 np0005534753.novalocal python3[5057]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 25 09:32:44 np0005534753.novalocal python3[5089]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:44 np0005534753.novalocal python3[5113]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:44 np0005534753.novalocal python3[5137]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:44 np0005534753.novalocal python3[5161]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:45 np0005534753.novalocal python3[5185]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:45 np0005534753.novalocal python3[5209]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:46 np0005534753.novalocal sudo[5233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjuolnosxiplbbdimgphtrvxohwceowe ; /usr/bin/python3'
Nov 25 09:32:46 np0005534753.novalocal sudo[5233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:32:47 np0005534753.novalocal python3[5235]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:47 np0005534753.novalocal sudo[5233]: pam_unix(sudo:session): session closed for user root
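
The sudo lines above show Ansible's become handshake: the wrapped command echoes a random BECOME-SUCCESS marker before starting the Python interpreter, so the controller can tell that privilege escalation succeeded before it streams the module payload to stdin. Schematically:

    # Shape of the escalation command as logged (marker value is random per task):
    sudo /bin/sh -c 'echo BECOME-SUCCESS-<random>; /usr/bin/python3'
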
Nov 25 09:32:47 np0005534753.novalocal sudo[5311]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqekltorsrpzajlxveixxnobgdqjeyvs ; /usr/bin/python3'
Nov 25 09:32:47 np0005534753.novalocal sudo[5311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:32:47 np0005534753.novalocal python3[5313]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:32:47 np0005534753.novalocal sudo[5311]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:48 np0005534753.novalocal sudo[5384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raauzcgylaatfkphkchtquoibubiyqiy ; /usr/bin/python3'
Nov 25 09:32:48 np0005534753.novalocal sudo[5384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:32:48 np0005534753.novalocal python3[5386]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063167.311019-21-98016315383962/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:48 np0005534753.novalocal sudo[5384]: pam_unix(sudo:session): session closed for user root
Nov 25 09:32:48 np0005534753.novalocal python3[5434]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:49 np0005534753.novalocal python3[5458]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:49 np0005534753.novalocal python3[5482]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:49 np0005534753.novalocal python3[5506]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:50 np0005534753.novalocal python3[5530]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:50 np0005534753.novalocal python3[5554]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:50 np0005534753.novalocal python3[5578]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:50 np0005534753.novalocal python3[5602]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:51 np0005534753.novalocal python3[5626]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:51 np0005534753.novalocal python3[5650]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:51 np0005534753.novalocal python3[5674]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:52 np0005534753.novalocal python3[5698]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:52 np0005534753.novalocal python3[5722]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:52 np0005534753.novalocal python3[5746]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:53 np0005534753.novalocal python3[5770]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:53 np0005534753.novalocal python3[5794]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:53 np0005534753.novalocal python3[5818]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:54 np0005534753.novalocal python3[5842]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:54 np0005534753.novalocal python3[5866]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:54 np0005534753.novalocal python3[5890]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:55 np0005534753.novalocal python3[5914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:55 np0005534753.novalocal python3[5938]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:55 np0005534753.novalocal python3[5962]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:55 np0005534753.novalocal python3[5986]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:56 np0005534753.novalocal python3[6010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:32:56 np0005534753.novalocal python3[6034]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
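
Each ansible-authorized_key invocation in the run above idempotently appends one public key to ~zuul/.ssh/authorized_keys. A rough shell equivalent of a single call, with $PUBKEY standing in for the logged key material:

    install -d -m 700 ~zuul/.ssh
    grep -qxF "$PUBKEY" ~zuul/.ssh/authorized_keys 2>/dev/null \
        || echo "$PUBKEY" >> ~zuul/.ssh/authorized_keys
    chmod 600 ~zuul/.ssh/authorized_keys
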
Nov 25 09:32:58 np0005534753.novalocal sudo[6058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqpuuhznnungrpdtkpakqehurzasntvk ; /usr/bin/python3'
Nov 25 09:32:58 np0005534753.novalocal sudo[6058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:32:59 np0005534753.novalocal python3[6060]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 09:32:59 np0005534753.novalocal systemd[1]: Starting Time & Date Service...
Nov 25 09:32:59 np0005534753.novalocal systemd[1]: Started Time & Date Service.
Nov 25 09:32:59 np0005534753.novalocal systemd-timedated[6062]: Changed time zone to 'UTC' (UTC).
Nov 25 09:32:59 np0005534753.novalocal sudo[6058]: pam_unix(sudo:session): session closed for user root
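
The community.general.timezone call above lands in systemd-timedated over D-Bus, as the "Changed time zone" line confirms. The command-line equivalent:

    timedatectl set-timezone UTC
    timedatectl show --property=Timezone   # should print Timezone=UTC
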
Nov 25 09:32:59 np0005534753.novalocal sudo[6089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfanzezuhnlstttbtcpuyxluvczpkgzi ; /usr/bin/python3'
Nov 25 09:32:59 np0005534753.novalocal sudo[6089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:32:59 np0005534753.novalocal python3[6091]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:32:59 np0005534753.novalocal sudo[6089]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:00 np0005534753.novalocal python3[6167]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:00 np0005534753.novalocal python3[6238]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764063179.7176385-153-79391844647365/source _original_basename=tmpm1hn9rzp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:00 np0005534753.novalocal python3[6338]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:01 np0005534753.novalocal python3[6409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764063180.6532059-183-30449364016902/source _original_basename=tmp69a1fg4v follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:01 np0005534753.novalocal sudo[6509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkddscvkylvmnxrnwlyqaranxbhcmaym ; /usr/bin/python3'
Nov 25 09:33:01 np0005534753.novalocal sudo[6509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:02 np0005534753.novalocal python3[6511]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:02 np0005534753.novalocal sudo[6509]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:02 np0005534753.novalocal sudo[6582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvsnfnetvarcirnbrrozdkdciobnpanu ; /usr/bin/python3'
Nov 25 09:33:02 np0005534753.novalocal sudo[6582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:02 np0005534753.novalocal python3[6584]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764063181.741574-231-172440518154523/source _original_basename=tmpt24yc2qj follow=False checksum=420e3a2f9d15a75f0a2d48d73e892351a51b8b4f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:02 np0005534753.novalocal sudo[6582]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:03 np0005534753.novalocal python3[6632]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:03 np0005534753.novalocal python3[6658]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:03 np0005534753.novalocal sudo[6736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gipybcytcpnikmbberevmcwlbgnkrojt ; /usr/bin/python3'
Nov 25 09:33:03 np0005534753.novalocal sudo[6736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:03 np0005534753.novalocal python3[6738]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:33:03 np0005534753.novalocal sudo[6736]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:03 np0005534753.novalocal sudo[6809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgfmracpvydhjhdxxdfkhyeafeohbugx ; /usr/bin/python3'
Nov 25 09:33:03 np0005534753.novalocal sudo[6809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:04 np0005534753.novalocal python3[6811]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063183.4686544-273-186409151134560/source _original_basename=tmpqauq2o96 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:04 np0005534753.novalocal sudo[6809]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:04 np0005534753.novalocal sudo[6860]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpkkizqzemsxvxdnmtfktkdtqjumsqyz ; /usr/bin/python3'
Nov 25 09:33:04 np0005534753.novalocal sudo[6860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:04 np0005534753.novalocal python3[6862]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-8568-339f-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:33:04 np0005534753.novalocal sudo[6860]: pam_unix(sudo:session): session closed for user root
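
Dropping a file into /etc/sudoers.d is followed by a syntax check, since a malformed sudoers fragment can lock out sudo entirely; the job runs the global check, and -f can narrow it to the new file:

    visudo -c                                  # validate sudoers plus all includes
    visudo -cf /etc/sudoers.d/zuul-sudo-grep   # validate just the new fragment
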
Nov 25 09:33:05 np0005534753.novalocal python3[6890]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163ef9-e89a-8568-339f-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 25 09:33:06 np0005534753.novalocal python3[6919]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:29 np0005534753.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 09:33:34 np0005534753.novalocal sudo[6945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxbkvxmjtmytzjnwsoownvqzqkhzjzwy ; /usr/bin/python3'
Nov 25 09:33:34 np0005534753.novalocal sudo[6945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:33:34 np0005534753.novalocal python3[6947]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:33:34 np0005534753.novalocal sudo[6945]: pam_unix(sudo:session): session closed for user root
Nov 25 09:33:55 np0005534753.novalocal sshd-session[6948]: Invalid user solana from 80.94.92.182 port 59664
Nov 25 09:33:56 np0005534753.novalocal sshd-session[6948]: Connection closed by invalid user solana 80.94.92.182 port 59664 [preauth]
Nov 25 09:33:59 np0005534753.novalocal sshd-session[6950]: Connection closed by authenticating user root 171.244.51.45 port 53526 [preauth]
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 25 09:34:07 np0005534753.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 25 09:34:07 np0005534753.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
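
The PCI lines above record a NIC being hot-plugged into the running guest: 1af4:1000 is the (legacy) virtio-net device ID, and once its BARs are assigned and the device is enabled, NetworkManager picks it up as eth1 below. From userspace this can be confirmed with:

    lspci -nn | grep 1af4    # list virtio devices by vendor ID
    ip -br link show eth1    # brief one-line state for the new interface
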
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3439] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 09:34:07 np0005534753.novalocal systemd-udevd[6952]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3597] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3618] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3620] device (eth1): carrier: link connected
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3622] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3626] policy: auto-activating connection 'Wired connection 1' (d4f00ec1-b080-3d7a-ab6b-d6cd50aae30b)
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3630] device (eth1): Activation: starting connection 'Wired connection 1' (d4f00ec1-b080-3d7a-ab6b-d6cd50aae30b)
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3631] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3632] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3636] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:34:07 np0005534753.novalocal NetworkManager[858]: <info>  [1764063247.3639] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:34:08 np0005534753.novalocal python3[6979]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-d194-dc36-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
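
The -j flag used above makes ip(8) emit JSON, which is why automation prefers it over scraping the human-readable output; piping into jq here is an assumption, not something the job is shown doing:

    ip -j link | jq -r '.[].ifname'   # one interface name per line
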
Nov 25 09:34:15 np0005534753.novalocal sudo[7057]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahqauxksaifgnanbyjgpznobpfscpkey ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 09:34:15 np0005534753.novalocal sudo[7057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:15 np0005534753.novalocal python3[7059]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:34:15 np0005534753.novalocal sudo[7057]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:15 np0005534753.novalocal sudo[7130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kijnixlekwdbaxbxskbmjhdoqryrdbcr ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 09:34:15 np0005534753.novalocal sudo[7130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:15 np0005534753.novalocal python3[7132]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764063255.2776737-102-83093232900011/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=e09144fc9a087a0bdce684f354fd8ed3c2d53418 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:34:15 np0005534753.novalocal sudo[7130]: pam_unix(sudo:session): session closed for user root
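
The file installed above is a NetworkManager keyfile; its real contents are templated by the job and not logged, so the following is only a hypothetical minimal example of the format, with the reload that the restart below makes redundant:

    # Hypothetical keyfile contents (the job's actual template is not in the log):
    cat > /etc/NetworkManager/system-connections/ci-private-network.nmconnection <<'EOF'
    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=auto
    EOF
    chmod 600 /etc/NetworkManager/system-connections/ci-private-network.nmconnection
    nmcli connection reload
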
Nov 25 09:34:16 np0005534753.novalocal sudo[7180]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csgypdkyzvbftwxxzqjnvqbnulqftbpc ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 09:34:16 np0005534753.novalocal sudo[7180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:34:16 np0005534753.novalocal python3[7182]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Stopping Network Manager...
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[858]: <info>  [1764063256.7568] caught SIGTERM, shutting down normally.
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[858]: <info>  [1764063256.7576] dhcp4 (eth0): canceled DHCP transaction
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[858]: <info>  [1764063256.7576] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[858]: <info>  [1764063256.7576] dhcp4 (eth0): state changed no lease
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[858]: <info>  [1764063256.7578] manager: NetworkManager state is now CONNECTING
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[858]: <info>  [1764063256.7645] dhcp4 (eth1): canceled DHCP transaction
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[858]: <info>  [1764063256.7646] dhcp4 (eth1): state changed no lease
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[858]: <info>  [1764063256.8286] exiting (success)
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Stopped Network Manager.
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: NetworkManager.service: Consumed 1.139s CPU time, 9.9M memory peak.
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Starting Network Manager...
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.8795] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e06b8e8c-0c4c-4141-b318-1ef0fbbec151)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.8796] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.8847] manager[0x559ddd2c1070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Starting Hostname Service...
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Started Hostname Service.
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9587] hostname: hostname: using hostnamed
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9588] hostname: static hostname changed from (none) to "np0005534753.novalocal"
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9592] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9596] manager[0x559ddd2c1070]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9596] manager[0x559ddd2c1070]: rfkill: WWAN hardware radio set enabled
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9619] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9620] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9620] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9621] manager: Networking is enabled by state file
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9623] settings: Loaded settings plugin: keyfile (internal)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9627] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9651] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
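
The deprecation warning above names its own remedy; migrating converts ifcfg-rh profiles to keyfiles under /etc/NetworkManager/system-connections/:

    nmcli connection migrate
    nmcli -f NAME,FILENAME connection show   # confirm the new backing files
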
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9662] dhcp: init: Using DHCP client 'internal'
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9665] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9670] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9674] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9681] device (lo): Activation: starting connection 'lo' (14c424d9-56c8-4f39-a02e-7c90be18328a)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9688] device (eth0): carrier: link connected
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9691] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9696] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9697] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9704] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9710] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9716] device (eth1): carrier: link connected
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9719] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9723] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (d4f00ec1-b080-3d7a-ab6b-d6cd50aae30b) (indicated)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9724] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9731] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9737] device (eth1): Activation: starting connection 'Wired connection 1' (d4f00ec1-b080-3d7a-ab6b-d6cd50aae30b)
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Started Network Manager.
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9746] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9751] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9753] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9755] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9758] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9760] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9762] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9764] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9767] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9773] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9776] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9783] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9785] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9808] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9811] dhcp4 (eth0): state changed new lease, address=38.102.83.147
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9817] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9823] device (lo): Activation: successful, device activated.
Nov 25 09:34:16 np0005534753.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 25 09:34:16 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063256.9868] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 09:34:16 np0005534753.novalocal sudo[7180]: pam_unix(sudo:session): session closed for user root
Nov 25 09:34:17 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063257.2666] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:34:17 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063257.2741] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:34:17 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063257.2745] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:34:17 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063257.2753] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 09:34:17 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063257.2762] device (eth0): Activation: successful, device activated.
Nov 25 09:34:17 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063257.2772] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 09:34:17 np0005534753.novalocal python3[7247]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-d194-dc36-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:34:27 np0005534753.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:34:38 np0005534753.novalocal systemd[4191]: Starting Mark boot as successful...
Nov 25 09:34:38 np0005534753.novalocal systemd[4191]: Finished Mark boot as successful.
Nov 25 09:34:46 np0005534753.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.3621] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 09:35:02 np0005534753.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:35:02 np0005534753.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.3894] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.3899] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.3920] device (eth1): Activation: successful, device activated.
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.3933] manager: startup complete
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.3936] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <warn>  [1764063302.3960] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.3976] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 25 09:35:02 np0005534753.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4196] dhcp4 (eth1): canceled DHCP transaction
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4197] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4197] dhcp4 (eth1): state changed no lease
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4213] policy: auto-activating connection 'ci-private-network' (c1249576-eed4-542c-bfdf-2a49ef515b96)
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4218] device (eth1): Activation: starting connection 'ci-private-network' (c1249576-eed4-542c-bfdf-2a49ef515b96)
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4220] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4223] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4233] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4242] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4282] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4283] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 09:35:02 np0005534753.novalocal NetworkManager[7199]: <info>  [1764063302.4293] device (eth1): Activation: successful, device activated.
Nov 25 09:35:12 np0005534753.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:35:17 np0005534753.novalocal sshd-session[4200]: Received disconnect from 38.102.83.114 port 47918:11: disconnected by user
Nov 25 09:35:17 np0005534753.novalocal sshd-session[4200]: Disconnected from user zuul 38.102.83.114 port 47918
Nov 25 09:35:17 np0005534753.novalocal sshd-session[4186]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:35:17 np0005534753.novalocal systemd-logind[822]: Session 1 logged out. Waiting for processes to exit.
Nov 25 09:35:20 np0005534753.novalocal sshd-session[7295]: Accepted publickey for zuul from 38.102.83.114 port 57386 ssh2: RSA SHA256:AY70hpNEXJR6fAK1y9JiAEJ1ZGByytYoO+9neWZvmFk
Nov 25 09:35:20 np0005534753.novalocal systemd-logind[822]: New session 3 of user zuul.
Nov 25 09:35:20 np0005534753.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 25 09:35:20 np0005534753.novalocal sshd-session[7295]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:35:20 np0005534753.novalocal sudo[7374]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fysgptsglhdhkdsrklqvqbhoqbrmwmou ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 09:35:20 np0005534753.novalocal sudo[7374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:20 np0005534753.novalocal python3[7376]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:35:20 np0005534753.novalocal sudo[7374]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:20 np0005534753.novalocal sudo[7447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqvntfwkdagsoeqfftjiooeixxrkmluj ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 25 09:35:20 np0005534753.novalocal sudo[7447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:35:20 np0005534753.novalocal python3[7449]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063320.1757727-259-121265287420515/source _original_basename=tmppnuqxjyr follow=False checksum=1bcea5ca02805fa96f229f0998c9a7340b764f1f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:35:20 np0005534753.novalocal sudo[7447]: pam_unix(sudo:session): session closed for user root
Nov 25 09:35:23 np0005534753.novalocal sshd-session[7298]: Connection closed by 38.102.83.114 port 57386
Nov 25 09:35:23 np0005534753.novalocal sshd-session[7295]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:35:23 np0005534753.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 25 09:35:23 np0005534753.novalocal systemd-logind[822]: Session 3 logged out. Waiting for processes to exit.
Nov 25 09:35:23 np0005534753.novalocal systemd-logind[822]: Removed session 3.
Nov 25 09:36:30 np0005534753.novalocal sshd-session[7474]: Invalid user solv from 80.94.92.182 port 34048
Nov 25 09:36:31 np0005534753.novalocal sshd-session[7474]: Connection closed by invalid user solv 80.94.92.182 port 34048 [preauth]
Nov 25 09:37:16 np0005534753.novalocal sshd-session[7477]: Connection closed by authenticating user root 171.244.51.45 port 54464 [preauth]
Nov 25 09:37:38 np0005534753.novalocal systemd[4191]: Created slice User Background Tasks Slice.
Nov 25 09:37:38 np0005534753.novalocal systemd[4191]: Starting Cleanup of User's Temporary Files and Directories...
Nov 25 09:37:38 np0005534753.novalocal systemd[4191]: Finished Cleanup of User's Temporary Files and Directories.
Nov 25 09:38:59 np0005534753.novalocal sshd-session[7482]: Invalid user sol from 80.94.92.182 port 36692
Nov 25 09:38:59 np0005534753.novalocal sshd-session[7482]: Connection closed by invalid user sol 80.94.92.182 port 36692 [preauth]
Nov 25 09:39:11 np0005534753.novalocal sshd-session[7485]: error: kex_exchange_identification: read: Connection reset by peer
Nov 25 09:39:11 np0005534753.novalocal sshd-session[7485]: Connection reset by 45.140.17.97 port 6560
Nov 25 09:40:14 np0005534753.novalocal sshd-session[7487]: Accepted publickey for zuul from 38.102.83.114 port 50480 ssh2: RSA SHA256:AY70hpNEXJR6fAK1y9JiAEJ1ZGByytYoO+9neWZvmFk
Nov 25 09:40:14 np0005534753.novalocal systemd-logind[822]: New session 4 of user zuul.
Nov 25 09:40:14 np0005534753.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 25 09:40:14 np0005534753.novalocal sshd-session[7487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:40:14 np0005534753.novalocal sudo[7514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ledxraltoxcewvvdxtlckbcqbhndzizt ; /usr/bin/python3'
Nov 25 09:40:14 np0005534753.novalocal sudo[7514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:14 np0005534753.novalocal python3[7516]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-0943-cfa5-000000001cd8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
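Note: the lsblk call above supplies the MAJ:MIN device number that the io.max writes at 09:40:20 below reuse. Run by hand (the printed value is inferred from those later writes, not shown in the log):

    # -n: no header, -d: whole device only, -o MAJ:MIN: print major:minor
    lsblk -nd -o MAJ:MIN /dev/vda
    # expected output on this host: 252:0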
Nov 25 09:40:15 np0005534753.novalocal sudo[7514]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:15 np0005534753.novalocal sudo[7543]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slvsfuxesesjsbpblgkfmxmbzowrihwr ; /usr/bin/python3'
Nov 25 09:40:15 np0005534753.novalocal sudo[7543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:15 np0005534753.novalocal python3[7545]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:15 np0005534753.novalocal sudo[7543]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:15 np0005534753.novalocal sudo[7569]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubfwkvnqrzuvkqfpjdcnlqvthojptrjt ; /usr/bin/python3'
Nov 25 09:40:15 np0005534753.novalocal sudo[7569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:15 np0005534753.novalocal python3[7571]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:15 np0005534753.novalocal sudo[7569]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:15 np0005534753.novalocal sudo[7595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwzlmnniapmepxurodrimzzuftyrwau ; /usr/bin/python3'
Nov 25 09:40:15 np0005534753.novalocal sudo[7595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:15 np0005534753.novalocal python3[7597]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:15 np0005534753.novalocal sudo[7595]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:15 np0005534753.novalocal sudo[7621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbghjqmbprdujdhjiubdegfzxqvzrved ; /usr/bin/python3'
Nov 25 09:40:15 np0005534753.novalocal sudo[7621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:16 np0005534753.novalocal python3[7623]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:16 np0005534753.novalocal sudo[7621]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:16 np0005534753.novalocal sudo[7647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdjbbhpkrrydvggnpdpqjcavqmgnmmab ; /usr/bin/python3'
Nov 25 09:40:16 np0005534753.novalocal sudo[7647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:16 np0005534753.novalocal python3[7649]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:16 np0005534753.novalocal sudo[7647]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:17 np0005534753.novalocal sudo[7725]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncxwucbledzgcxqcruprzyanvxfcxhli ; /usr/bin/python3'
Nov 25 09:40:17 np0005534753.novalocal sudo[7725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:17 np0005534753.novalocal python3[7727]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:40:17 np0005534753.novalocal sudo[7725]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:17 np0005534753.novalocal sudo[7798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vukwthhtkyooymomhxqkpvebgtfbrklj ; /usr/bin/python3'
Nov 25 09:40:17 np0005534753.novalocal sudo[7798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:17 np0005534753.novalocal python3[7800]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063616.9037554-484-9323748079641/source _original_basename=tmpb7we9j76 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:40:17 np0005534753.novalocal sudo[7798]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:18 np0005534753.novalocal sudo[7848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ardticjraetpkhtwxikqszxglozemwsc ; /usr/bin/python3'
Nov 25 09:40:18 np0005534753.novalocal sudo[7848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:18 np0005534753.novalocal python3[7850]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 09:40:18 np0005534753.novalocal systemd[1]: Reloading.
Nov 25 09:40:18 np0005534753.novalocal systemd-rc-local-generator[7868]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:40:18 np0005534753.novalocal sudo[7848]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:19 np0005534753.novalocal sudo[7904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwwjwfzmnvxdgewrjruxofmmtnqqlzbh ; /usr/bin/python3'
Nov 25 09:40:19 np0005534753.novalocal sudo[7904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:20 np0005534753.novalocal python3[7906]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 25 09:40:20 np0005534753.novalocal sudo[7904]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:20 np0005534753.novalocal sudo[7930]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vltgvpwqwkjvtnfvzwpxyqghdrcxoypg ; /usr/bin/python3'
Nov 25 09:40:20 np0005534753.novalocal sudo[7930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:20 np0005534753.novalocal python3[7932]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:20 np0005534753.novalocal sudo[7930]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:20 np0005534753.novalocal sudo[7958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpssgxxsxmaizqhlgnwyjcfyodljyjuj ; /usr/bin/python3'
Nov 25 09:40:20 np0005534753.novalocal sudo[7958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:20 np0005534753.novalocal python3[7960]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:20 np0005534753.novalocal sudo[7958]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:20 np0005534753.novalocal sudo[7986]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njwxfwufadghosqjkbatxbwoeocvheob ; /usr/bin/python3'
Nov 25 09:40:20 np0005534753.novalocal sudo[7986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:20 np0005534753.novalocal python3[7988]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:20 np0005534753.novalocal sudo[7986]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:21 np0005534753.novalocal sudo[8014]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezyswvsszqcbwdxwmtxnimqsklnihsq ; /usr/bin/python3'
Nov 25 09:40:21 np0005534753.novalocal sudo[8014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:21 np0005534753.novalocal python3[8016]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:40:21 np0005534753.novalocal sudo[8014]: pam_unix(sudo:session): session closed for user root
Nov 25 09:40:21 np0005534753.novalocal python3[8043]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-0943-cfa5-000000001cdf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
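Note: the four echo tasks above apply cgroup-v2 io.max throttles to the top-level init.scope, machine.slice, system.slice, and user.slice groups: 18000 read/write IOPS and 262144000 B/s (250 MiB/s) each way on device 252:0, presumably /dev/vda given the earlier lsblk. One write plus the read-back the last task performs, by hand:

    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
        > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max   # kernel prints back the active limits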
Nov 25 09:40:22 np0005534753.novalocal python3[8073]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 09:40:24 np0005534753.novalocal sshd-session[7490]: Connection closed by 38.102.83.114 port 50480
Nov 25 09:40:24 np0005534753.novalocal sshd-session[7487]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:40:24 np0005534753.novalocal systemd-logind[822]: Session 4 logged out. Waiting for processes to exit.
Nov 25 09:40:24 np0005534753.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 25 09:40:24 np0005534753.novalocal systemd[1]: session-4.scope: Consumed 4.044s CPU time.
Nov 25 09:40:24 np0005534753.novalocal systemd-logind[822]: Removed session 4.
Nov 25 09:40:25 np0005534753.novalocal sshd-session[8079]: Accepted publickey for zuul from 38.102.83.114 port 52032 ssh2: RSA SHA256:AY70hpNEXJR6fAK1y9JiAEJ1ZGByytYoO+9neWZvmFk
Nov 25 09:40:25 np0005534753.novalocal systemd-logind[822]: New session 5 of user zuul.
Nov 25 09:40:25 np0005534753.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 25 09:40:25 np0005534753.novalocal sshd-session[8079]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:40:25 np0005534753.novalocal sudo[8106]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chuxymkellgyfykfmeqtbjszgwsoerwg ; /usr/bin/python3'
Nov 25 09:40:25 np0005534753.novalocal sudo[8106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:40:26 np0005534753.novalocal python3[8108]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
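Note: a plain-CLI sketch of the dnf module call above (the module passes flags such as install_weak_deps=True that match dnf's defaults anyway):

    dnf -y install podman buildah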
Nov 25 09:40:40 np0005534753.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 25 09:40:40 np0005534753.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:40:40 np0005534753.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 09:40:40 np0005534753.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:40:40 np0005534753.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:40:40 np0005534753.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:40:40 np0005534753.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:40:40 np0005534753.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:40:49 np0005534753.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 25 09:40:49 np0005534753.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:40:49 np0005534753.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 09:40:49 np0005534753.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:40:49 np0005534753.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:40:49 np0005534753.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:40:49 np0005534753.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:40:49 np0005534753.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:40:59 np0005534753.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 25 09:40:59 np0005534753.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:40:59 np0005534753.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 09:40:59 np0005534753.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:40:59 np0005534753.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:40:59 np0005534753.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:40:59 np0005534753.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:40:59 np0005534753.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:41:00 np0005534753.novalocal setsebool[8175]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 25 09:41:00 np0005534753.novalocal setsebool[8175]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
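Note: the two boolean changes above, done by hand. The -P (persistent) flag is an inference from the full policy reload that follows at 09:41:12, since persisting a boolean rebuilds the policy:

    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1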
Nov 25 09:41:12 np0005534753.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 25 09:41:12 np0005534753.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 09:41:12 np0005534753.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 25 09:41:12 np0005534753.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 09:41:12 np0005534753.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 25 09:41:12 np0005534753.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 09:41:12 np0005534753.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 09:41:12 np0005534753.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 09:41:34 np0005534753.novalocal dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 09:41:34 np0005534753.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 09:41:34 np0005534753.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 25 09:41:34 np0005534753.novalocal systemd[1]: Reloading.
Nov 25 09:41:35 np0005534753.novalocal systemd-rc-local-generator[8924]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 09:41:35 np0005534753.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 09:41:36 np0005534753.novalocal sudo[8106]: pam_unix(sudo:session): session closed for user root
Nov 25 09:41:43 np0005534753.novalocal python3[14921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-8ed4-e57b-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:41:44 np0005534753.novalocal kernel: evm: overlay not supported
Nov 25 09:41:44 np0005534753.novalocal systemd[4191]: Starting D-Bus User Message Bus...
Nov 25 09:41:44 np0005534753.novalocal dbus-broker-launch[15443]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 25 09:41:44 np0005534753.novalocal dbus-broker-launch[15443]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 25 09:41:44 np0005534753.novalocal systemd[4191]: Started D-Bus User Message Bus.
Nov 25 09:41:44 np0005534753.novalocal dbus-broker-launch[15443]: Ready
Nov 25 09:41:44 np0005534753.novalocal systemd[4191]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 25 09:41:44 np0005534753.novalocal systemd[4191]: Created slice Slice /user.
Nov 25 09:41:44 np0005534753.novalocal systemd[4191]: podman-15365.scope: unit configures an IP firewall, but not running as root.
Nov 25 09:41:44 np0005534753.novalocal systemd[4191]: (This warning is only shown for the first unit using IP firewalling.)
Nov 25 09:41:44 np0005534753.novalocal systemd[4191]: Started podman-15365.scope.
Nov 25 09:41:44 np0005534753.novalocal systemd[4191]: Started podman-pause-d68d7ae5.scope.
Nov 25 09:41:45 np0005534753.novalocal sudo[15890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdzffxiiikdvowwggybtixzxdfrtfiiv ; /usr/bin/python3'
Nov 25 09:41:45 np0005534753.novalocal sudo[15890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:41:45 np0005534753.novalocal python3[15902]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.65:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.65:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:41:45 np0005534753.novalocal python3[15902]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 25 09:41:45 np0005534753.novalocal sudo[15890]: pam_unix(sudo:session): session closed for user root
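Note: the blockinfile task at 09:41:45 marks 38.102.83.65:5001 as an insecure (plain-HTTP) registry for podman/buildah. A shell sketch of the resulting append, using the BEGIN/END markers the invocation specifies:

    cat >> /etc/containers/registries.conf <<'EOF'
    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.65:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
    EOF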
Nov 25 09:41:45 np0005534753.novalocal sshd-session[8082]: Connection closed by 38.102.83.114 port 52032
Nov 25 09:41:45 np0005534753.novalocal sshd-session[8079]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:41:45 np0005534753.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 25 09:41:45 np0005534753.novalocal systemd[1]: session-5.scope: Consumed 1min 3.408s CPU time.
Nov 25 09:41:45 np0005534753.novalocal systemd-logind[822]: Session 5 logged out. Waiting for processes to exit.
Nov 25 09:41:45 np0005534753.novalocal systemd-logind[822]: Removed session 5.
Nov 25 09:41:53 np0005534753.novalocal irqbalance[818]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 25 09:41:53 np0005534753.novalocal irqbalance[818]: IRQ 27 affinity is now unmanaged
Nov 25 09:42:20 np0005534753.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 09:42:20 np0005534753.novalocal systemd[1]: Finished man-db-cache-update.service.
Nov 25 09:42:20 np0005534753.novalocal systemd[1]: man-db-cache-update.service: Consumed 49.492s CPU time.
Nov 25 09:42:20 np0005534753.novalocal systemd[1]: run-r1ea34131004c416ca705a6593333dd0f.service: Deactivated successfully.
Nov 25 09:42:45 np0005534753.novalocal sshd-session[29594]: Unable to negotiate with 38.102.83.176 port 45844: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 09:42:45 np0005534753.novalocal sshd-session[29596]: Unable to negotiate with 38.102.83.176 port 45864: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 09:42:45 np0005534753.novalocal sshd-session[29597]: Connection closed by 38.102.83.176 port 45838 [preauth]
Nov 25 09:42:45 np0005534753.novalocal sshd-session[29595]: Unable to negotiate with 38.102.83.176 port 45860: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 09:42:45 np0005534753.novalocal sshd-session[29598]: Connection closed by 38.102.83.176 port 45828 [preauth]
Nov 25 09:42:49 np0005534753.novalocal sshd-session[29604]: Accepted publickey for zuul from 38.102.83.114 port 57760 ssh2: RSA SHA256:AY70hpNEXJR6fAK1y9JiAEJ1ZGByytYoO+9neWZvmFk
Nov 25 09:42:49 np0005534753.novalocal systemd-logind[822]: New session 6 of user zuul.
Nov 25 09:42:49 np0005534753.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 25 09:42:49 np0005534753.novalocal sshd-session[29604]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:42:50 np0005534753.novalocal python3[29631]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBODm25P6Yj5x4NJQU/17g3EggEy4FQQ/uFqtM3oWpMx4ieCJsAG+h/kELdFjoGGhJvZGKpa15SV/0OYndPV1Dc8= zuul@np0005534752.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:42:50 np0005534753.novalocal sudo[29655]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoftyluigdvejawqsgrcwoasrfikopan ; /usr/bin/python3'
Nov 25 09:42:50 np0005534753.novalocal sudo[29655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:50 np0005534753.novalocal python3[29657]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBODm25P6Yj5x4NJQU/17g3EggEy4FQQ/uFqtM3oWpMx4ieCJsAG+h/kELdFjoGGhJvZGKpa15SV/0OYndPV1Dc8= zuul@np0005534752.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:42:50 np0005534753.novalocal sudo[29655]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:51 np0005534753.novalocal sudo[29681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkkleptosvwxatwsrwriznvjzbtylthq ; /usr/bin/python3'
Nov 25 09:42:51 np0005534753.novalocal sudo[29681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:51 np0005534753.novalocal python3[29683]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005534753.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 25 09:42:51 np0005534753.novalocal useradd[29685]: new group: name=cloud-admin, GID=1002
Nov 25 09:42:51 np0005534753.novalocal useradd[29685]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
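Note: roughly the manual equivalent of the user module call above; the explicit --uid mirrors the value useradd auto-selected (1002), and the matching GID 1002 group comes from the distro's user-private-group default:

    useradd --uid 1002 --create-home --shell /bin/bash cloud-admin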
Nov 25 09:42:51 np0005534753.novalocal sudo[29681]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:51 np0005534753.novalocal sudo[29715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diqeulyrvnkbtnbsizqqnryxtcgckrpc ; /usr/bin/python3'
Nov 25 09:42:51 np0005534753.novalocal sudo[29715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:51 np0005534753.novalocal python3[29717]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBODm25P6Yj5x4NJQU/17g3EggEy4FQQ/uFqtM3oWpMx4ieCJsAG+h/kELdFjoGGhJvZGKpa15SV/0OYndPV1Dc8= zuul@np0005534752.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 25 09:42:51 np0005534753.novalocal sudo[29715]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:51 np0005534753.novalocal sudo[29793]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idwgljaylfxncdhwgykvqgamjhbumtjy ; /usr/bin/python3'
Nov 25 09:42:51 np0005534753.novalocal sudo[29793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:52 np0005534753.novalocal python3[29795]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:42:52 np0005534753.novalocal sudo[29793]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:52 np0005534753.novalocal sudo[29866]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysivgifsomxbbtoooyuxvvcbjdzrrnuz ; /usr/bin/python3'
Nov 25 09:42:52 np0005534753.novalocal sudo[29866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:52 np0005534753.novalocal python3[29868]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764063771.8110774-139-77524310297509/source _original_basename=tmpork8b6mj follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:42:52 np0005534753.novalocal sudo[29866]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:53 np0005534753.novalocal sudo[29916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hksequzwdnixtgjecbzixjabhrjnikeh ; /usr/bin/python3'
Nov 25 09:42:53 np0005534753.novalocal sudo[29916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:42:53 np0005534753.novalocal python3[29918]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 25 09:42:53 np0005534753.novalocal systemd[1]: Starting Hostname Service...
Nov 25 09:42:53 np0005534753.novalocal systemd[1]: Started Hostname Service.
Nov 25 09:42:53 np0005534753.novalocal systemd-hostnamed[29922]: Changed pretty hostname to 'compute-0'
Nov 25 09:42:53 compute-0 systemd-hostnamed[29922]: Hostname set to <compute-0> (static)
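Note: ansible.builtin.hostname with use=systemd drives systemd-hostnamed over D-Bus, which is why both a pretty and a static hostname change are logged above. The CLI equivalent is roughly:

    hostnamectl set-hostname compute-0   # with no option, sets the static, transient, and pretty names alike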
Nov 25 09:42:53 compute-0 NetworkManager[7199]: <info>  [1764063773.4809] hostname: static hostname changed from "np0005534753.novalocal" to "compute-0"
Nov 25 09:42:53 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 09:42:53 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 09:42:53 compute-0 sudo[29916]: pam_unix(sudo:session): session closed for user root
Nov 25 09:42:53 compute-0 sshd-session[29607]: Connection closed by 38.102.83.114 port 57760
Nov 25 09:42:53 compute-0 sshd-session[29604]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:42:53 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 25 09:42:53 compute-0 systemd[1]: session-6.scope: Consumed 2.142s CPU time.
Nov 25 09:42:53 compute-0 systemd-logind[822]: Session 6 logged out. Waiting for processes to exit.
Nov 25 09:42:53 compute-0 systemd-logind[822]: Removed session 6.
Nov 25 09:43:03 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 09:43:23 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 09:43:50 compute-0 sshd-session[29939]: Connection closed by authenticating user root 171.244.51.45 port 32954 [preauth]
Nov 25 09:46:38 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 25 09:46:38 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 25 09:46:38 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 25 09:46:38 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 25 09:47:09 compute-0 sshd-session[29945]: Connection closed by authenticating user root 171.244.51.45 port 34502 [preauth]
Nov 25 09:48:06 compute-0 sshd-session[29948]: Accepted publickey for zuul from 38.102.83.176 port 45174 ssh2: RSA SHA256:AY70hpNEXJR6fAK1y9JiAEJ1ZGByytYoO+9neWZvmFk
Nov 25 09:48:06 compute-0 systemd-logind[822]: New session 7 of user zuul.
Nov 25 09:48:06 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 25 09:48:06 compute-0 sshd-session[29948]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 09:48:07 compute-0 python3[30024]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 09:48:08 compute-0 sudo[30138]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfqoxkiteejfguhacetrsqqfqtsidgmh ; /usr/bin/python3'
Nov 25 09:48:08 compute-0 sudo[30138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:08 compute-0 python3[30140]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:48:08 compute-0 sudo[30138]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:08 compute-0 sudo[30211]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrffwydxexayhjfxmhdvdknngwmeqemc ; /usr/bin/python3'
Nov 25 09:48:08 compute-0 sudo[30211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:09 compute-0 python3[30213]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764064088.2459931-33720-155449734868166/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:09 compute-0 sudo[30211]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:09 compute-0 sudo[30237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wztgmdncwepktoqgxkuxgwcvclktawqp ; /usr/bin/python3'
Nov 25 09:48:09 compute-0 sudo[30237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:09 compute-0 python3[30239]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:48:09 compute-0 sudo[30237]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:09 compute-0 sudo[30310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyndgqmpwothoupaikxyfymdmjwydsap ; /usr/bin/python3'
Nov 25 09:48:09 compute-0 sudo[30310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:09 compute-0 python3[30312]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764064088.2459931-33720-155449734868166/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:09 compute-0 sudo[30310]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:09 compute-0 sudo[30336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvltdzvaclggatkcwbfjfzenzovgetzj ; /usr/bin/python3'
Nov 25 09:48:09 compute-0 sudo[30336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:09 compute-0 python3[30338]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:48:09 compute-0 sudo[30336]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:10 compute-0 sudo[30409]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bukflyjceuyplfrazbbmozawoaawhdtc ; /usr/bin/python3'
Nov 25 09:48:10 compute-0 sudo[30409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:10 compute-0 python3[30411]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764064088.2459931-33720-155449734868166/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:10 compute-0 sudo[30409]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:10 compute-0 sudo[30435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwbybhdwfptlrcrdwgjxszeeddqkqyxc ; /usr/bin/python3'
Nov 25 09:48:10 compute-0 sudo[30435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:10 compute-0 python3[30437]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:48:10 compute-0 sudo[30435]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:10 compute-0 sudo[30508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odtqtwpkbrdljqpmfnsawtgsfavensfg ; /usr/bin/python3'
Nov 25 09:48:10 compute-0 sudo[30508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:10 compute-0 python3[30510]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764064088.2459931-33720-155449734868166/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:10 compute-0 sudo[30508]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:10 compute-0 sudo[30534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jumjpynntvtnozeidgzmoeueekpofhcb ; /usr/bin/python3'
Nov 25 09:48:10 compute-0 sudo[30534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:11 compute-0 python3[30536]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:48:11 compute-0 sudo[30534]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:11 compute-0 sudo[30607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyodbwnjegrzhxunlcmvanouyexifogy ; /usr/bin/python3'
Nov 25 09:48:11 compute-0 sudo[30607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:11 compute-0 python3[30609]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764064088.2459931-33720-155449734868166/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:11 compute-0 sudo[30607]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:11 compute-0 sudo[30633]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-judnkskkbvviwodhtlgpzedqbxjasaja ; /usr/bin/python3'
Nov 25 09:48:11 compute-0 sudo[30633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:11 compute-0 python3[30635]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:48:11 compute-0 sudo[30633]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:11 compute-0 sudo[30706]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpgaxkgceuzupghibrpdjqkkeqsajrwr ; /usr/bin/python3'
Nov 25 09:48:11 compute-0 sudo[30706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:11 compute-0 python3[30708]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764064088.2459931-33720-155449734868166/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:11 compute-0 sudo[30706]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:11 compute-0 sudo[30732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkcvgcjtmwnthffobpugbzopsqzghehg ; /usr/bin/python3'
Nov 25 09:48:11 compute-0 sudo[30732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:12 compute-0 python3[30734]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 25 09:48:12 compute-0 sudo[30732]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:12 compute-0 sudo[30805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzxqzbxomhwwfpvzvounsrjnegvaclkr ; /usr/bin/python3'
Nov 25 09:48:12 compute-0 sudo[30805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 09:48:12 compute-0 python3[30807]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764064088.2459931-33720-155449734868166/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 09:48:12 compute-0 sudo[30805]: pam_unix(sudo:session): session closed for user root
Nov 25 09:48:15 compute-0 sshd-session[30833]: Unable to negotiate with 192.168.122.11 port 57244: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Nov 25 09:48:15 compute-0 sshd-session[30832]: Connection closed by 192.168.122.11 port 57224 [preauth]
Nov 25 09:48:15 compute-0 sshd-session[30836]: Connection closed by 192.168.122.11 port 57210 [preauth]
Nov 25 09:48:15 compute-0 sshd-session[30834]: Unable to negotiate with 192.168.122.11 port 57232: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 25 09:48:15 compute-0 sshd-session[30835]: Unable to negotiate with 192.168.122.11 port 57240: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 25 09:50:28 compute-0 sshd-session[30843]: Connection closed by authenticating user root 171.244.51.45 port 38166 [preauth]
Nov 25 09:50:59 compute-0 python3[30868]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 09:53:53 compute-0 sshd-session[30873]: Connection closed by authenticating user root 171.244.51.45 port 56230 [preauth]
Nov 25 09:54:56 compute-0 sshd-session[30875]: Connection closed by 148.113.206.49 port 48492 [preauth]
Nov 25 09:55:58 compute-0 sshd-session[29951]: Received disconnect from 38.102.83.176 port 45174:11: disconnected by user
Nov 25 09:55:58 compute-0 sshd-session[29951]: Disconnected from user zuul 38.102.83.176 port 45174
Nov 25 09:55:58 compute-0 sshd-session[29948]: pam_unix(sshd:session): session closed for user zuul
Nov 25 09:55:58 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 25 09:55:58 compute-0 systemd[1]: session-7.scope: Consumed 4.765s CPU time.
Nov 25 09:55:58 compute-0 systemd-logind[822]: Session 7 logged out. Waiting for processes to exit.
Nov 25 09:55:58 compute-0 systemd-logind[822]: Removed session 7.
Nov 25 09:57:16 compute-0 sshd-session[30879]: Connection closed by authenticating user root 171.244.51.45 port 53312 [preauth]
Nov 25 10:00:47 compute-0 sshd-session[30883]: Connection closed by authenticating user root 171.244.51.45 port 49408 [preauth]
Nov 25 10:01:01 compute-0 CROND[30886]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 10:01:01 compute-0 run-parts[30889]: (/etc/cron.hourly) starting 0anacron
Nov 25 10:01:01 compute-0 anacron[30897]: Anacron started on 2025-11-25
Nov 25 10:01:01 compute-0 anacron[30897]: Will run job `cron.daily' in 26 min.
Nov 25 10:01:01 compute-0 anacron[30897]: Will run job `cron.weekly' in 46 min.
Nov 25 10:01:01 compute-0 anacron[30897]: Will run job `cron.monthly' in 66 min.
Nov 25 10:01:01 compute-0 anacron[30897]: Jobs will be executed sequentially
Nov 25 10:01:01 compute-0 run-parts[30899]: (/etc/cron.hourly) finished 0anacron
Nov 25 10:01:01 compute-0 CROND[30885]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 10:03:50 compute-0 sshd-session[30900]: Accepted publickey for zuul from 192.168.122.30 port 41036 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:03:51 compute-0 systemd-logind[822]: New session 8 of user zuul.
Nov 25 10:03:51 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 25 10:03:51 compute-0 sshd-session[30900]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:03:52 compute-0 python3.9[31053]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:03:53 compute-0 sudo[31232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qotjexenuxfnsufxurjoqsflfbpfxqmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065032.9437277-32-122148314731621/AnsiballZ_command.py'
Nov 25 10:03:53 compute-0 sudo[31232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:03:53 compute-0 python3.9[31234]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
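Reassembled from the logged _raw_params, the payload of this task is the following standalone script; the shebang and comments are added here, everything else is verbatim from the record above:

    #!/bin/bash
    set -euxo pipefail
    pushd /var/tmp
    # Fetch and unpack the repo-setup tool from its main-branch tarball
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    # Install into a throwaway virtualenv, then point the host at the antelope repos
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main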
Nov 25 10:04:01 compute-0 sudo[31232]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:02 compute-0 sshd-session[30903]: Connection closed by 192.168.122.30 port 41036
Nov 25 10:04:02 compute-0 sshd-session[30900]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:04:02 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 25 10:04:02 compute-0 systemd[1]: session-8.scope: Consumed 8.514s CPU time.
Nov 25 10:04:02 compute-0 systemd-logind[822]: Session 8 logged out. Waiting for processes to exit.
Nov 25 10:04:02 compute-0 systemd-logind[822]: Removed session 8.
Nov 25 10:04:08 compute-0 sshd-session[31292]: Accepted publickey for zuul from 192.168.122.30 port 37160 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:04:08 compute-0 systemd-logind[822]: New session 9 of user zuul.
Nov 25 10:04:08 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 25 10:04:08 compute-0 sshd-session[31292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:04:09 compute-0 python3.9[31445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:04:09 compute-0 sshd-session[31295]: Connection closed by 192.168.122.30 port 37160
Nov 25 10:04:09 compute-0 sshd-session[31292]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:04:09 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 25 10:04:09 compute-0 systemd-logind[822]: Session 9 logged out. Waiting for processes to exit.
Nov 25 10:04:09 compute-0 systemd-logind[822]: Removed session 9.
Nov 25 10:04:12 compute-0 sshd-session[31473]: Connection closed by authenticating user root 171.244.51.45 port 36496 [preauth]
Nov 25 10:04:25 compute-0 sshd-session[31476]: Accepted publickey for zuul from 192.168.122.30 port 36056 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:04:25 compute-0 systemd-logind[822]: New session 10 of user zuul.
Nov 25 10:04:25 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 25 10:04:25 compute-0 sshd-session[31476]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:04:26 compute-0 python3.9[31629]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 25 10:04:27 compute-0 python3.9[31803]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:04:28 compute-0 sudo[31953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxjwdxoxnerzgfywbrlhynvqteygkcoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065067.5761037-45-161447018329944/AnsiballZ_command.py'
Nov 25 10:04:28 compute-0 sudo[31953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:28 compute-0 python3.9[31955]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:04:28 compute-0 sudo[31953]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:28 compute-0 sudo[32106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zodijtdyziszendunmcydidiaopwinsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065068.4980228-57-126065126516848/AnsiballZ_stat.py'
Nov 25 10:04:28 compute-0 sudo[32106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:29 compute-0 python3.9[32108]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:04:29 compute-0 sudo[32106]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:29 compute-0 sudo[32258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vewhemgxmsjwaiyzqmlifmtzjljueqpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065069.2260532-65-232474793090618/AnsiballZ_file.py'
Nov 25 10:04:29 compute-0 sudo[32258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:29 compute-0 python3.9[32260]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:04:29 compute-0 sudo[32258]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:30 compute-0 sudo[32410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pboqazlogpoczubcupjyidusthfpoykg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065070.0768752-73-231778224470237/AnsiballZ_stat.py'
Nov 25 10:04:30 compute-0 sudo[32410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:30 compute-0 python3.9[32412]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:04:30 compute-0 sudo[32410]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:31 compute-0 sudo[32533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgobjpdjjqxhefkfxjgcwnofegzfsjju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065070.0768752-73-231778224470237/AnsiballZ_copy.py'
Nov 25 10:04:31 compute-0 sudo[32533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:31 compute-0 python3.9[32535]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065070.0768752-73-231778224470237/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:04:31 compute-0 sudo[32533]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:31 compute-0 sudo[32685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swalyokqehqwoqnotntgrkcyivruyujs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065071.4735596-88-83326942762278/AnsiballZ_setup.py'
Nov 25 10:04:31 compute-0 sudo[32685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:32 compute-0 python3.9[32687]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:04:32 compute-0 sudo[32685]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:32 compute-0 sudo[32841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwkqdopjvxllwpohicfwkmbrmfhdcxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065072.3971214-96-238864125224248/AnsiballZ_file.py'
Nov 25 10:04:32 compute-0 sudo[32841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:32 compute-0 python3.9[32843]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:04:32 compute-0 sudo[32841]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:33 compute-0 sudo[32993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugdhefxwljfhvzpqjxyjeeqmvfiqreup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065073.0904577-105-65806378869395/AnsiballZ_file.py'
Nov 25 10:04:33 compute-0 sudo[32993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:33 compute-0 python3.9[32995]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:04:33 compute-0 sudo[32993]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:34 compute-0 python3.9[33145]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:04:39 compute-0 python3.9[33398]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:04:39 compute-0 python3.9[33548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:04:40 compute-0 python3.9[33702]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:04:41 compute-0 sudo[33858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfvykikhubcssajifutkbnktqimrzhwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065081.2505789-153-77245642661891/AnsiballZ_setup.py'
Nov 25 10:04:41 compute-0 sudo[33858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:41 compute-0 python3.9[33860]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:04:42 compute-0 sudo[33858]: pam_unix(sudo:session): session closed for user root
Nov 25 10:04:42 compute-0 sudo[33942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzabkadoyddjvlycmaldfdlkdetljcpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065081.2505789-153-77245642661891/AnsiballZ_dnf.py'
Nov 25 10:04:42 compute-0 sudo[33942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:04:42 compute-0 python3.9[33944]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
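With state=present and otherwise default options, this dnf module call corresponds roughly to the following CLI invocation (package list taken verbatim from the record):

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos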
Nov 25 10:05:33 compute-0 systemd[1]: Reloading.
Nov 25 10:05:33 compute-0 systemd-rc-local-generator[34140]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:05:33 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 25 10:05:34 compute-0 systemd[1]: Reloading.
Nov 25 10:05:34 compute-0 systemd-rc-local-generator[34184]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:05:34 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 25 10:05:34 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 25 10:05:34 compute-0 systemd[1]: Reloading.
Nov 25 10:05:34 compute-0 systemd-rc-local-generator[34226]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:05:34 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 25 10:05:35 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 10:05:35 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 10:05:35 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 10:06:38 compute-0 kernel: SELinux:  Converting 2717 SID table entries...
Nov 25 10:06:38 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Nov 25 10:06:38 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:06:38 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:06:38 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:06:38 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:06:38 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:06:38 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:06:38 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:06:38 compute-0 systemd[1]: Starting dnf makecache...
Nov 25 10:06:39 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 25 10:06:39 compute-0 dnf[34513]: Failed determining last makecache time.
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-barbican-42b4c41831408a8e323 126 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 196 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-cinder-1c00d6490d88e436f26ef 158 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-python-stevedore-c4acc5639fd2329372142 215 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-python-observabilityclient-2f31846d73c 208 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-os-net-config-bbae2ed8a159b0435a473f38 210 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 177 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-python-designate-tests-tempest-347fdbc 185 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-glance-1fd12c29b339f30fe823e 188 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 198 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-manila-3c01b7181572c95dac462 211 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-python-whitebox-neutron-tests-tempest- 202 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-octavia-ba397f07a7331190208c 195 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-watcher-c014f81a8647287f6dcc 198 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 systemd[1]: Reloading.
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-python-tcib-1124124ec06aadbac34f0d340b 188 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 194 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-swift-dc98a8463506ac520c469a 183 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 systemd-rc-local-generator[34570]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-python-tempestconf-8515371b7cceebd4282 112 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: delorean-openstack-heat-ui-013accbfd179753bc3f0 148 kB/s | 3.0 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: CentOS Stream 9 - BaseOS                         60 kB/s | 5.4 kB     00:00
Nov 25 10:06:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:06:39 compute-0 dnf[34513]: CentOS Stream 9 - AppStream                      52 kB/s | 6.1 kB     00:00
Nov 25 10:06:39 compute-0 dnf[34513]: CentOS Stream 9 - CRB                            47 kB/s | 5.3 kB     00:00
Nov 25 10:06:40 compute-0 sudo[33942]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:40 compute-0 dnf[34513]: CentOS Stream 9 - Extras packages                72 kB/s | 8.3 kB     00:00
Nov 25 10:06:40 compute-0 dnf[34513]: dlrn-antelope-testing                           119 kB/s | 3.0 kB     00:00
Nov 25 10:06:40 compute-0 dnf[34513]: dlrn-antelope-build-deps                        132 kB/s | 3.0 kB     00:00
Nov 25 10:06:40 compute-0 dnf[34513]: centos9-rabbitmq                                111 kB/s | 3.0 kB     00:00
Nov 25 10:06:40 compute-0 dnf[34513]: centos9-storage                                 111 kB/s | 3.0 kB     00:00
Nov 25 10:06:40 compute-0 dnf[34513]: centos9-opstools                                102 kB/s | 3.0 kB     00:00
Nov 25 10:06:40 compute-0 dnf[34513]: NFV SIG OpenvSwitch                             119 kB/s | 3.0 kB     00:00
Nov 25 10:06:40 compute-0 dnf[34513]: repo-setup-centos-appstream                     182 kB/s | 4.4 kB     00:00
Nov 25 10:06:40 compute-0 dnf[34513]: repo-setup-centos-baseos                        105 kB/s | 3.9 kB     00:00
Nov 25 10:06:40 compute-0 sudo[35495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzdpdqqvddpjhrwazhpvfsvktmbwudfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065200.1851616-165-79554015484749/AnsiballZ_command.py'
Nov 25 10:06:40 compute-0 sudo[35495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:40 compute-0 dnf[34513]: repo-setup-centos-highavailability              158 kB/s | 3.9 kB     00:00
Nov 25 10:06:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:06:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:06:40 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.405s CPU time.
Nov 25 10:06:40 compute-0 systemd[1]: run-rc13afc51d8254ed39c0ddba3e4a22ec2.service: Deactivated successfully.
Nov 25 10:06:40 compute-0 dnf[34513]: repo-setup-centos-powertools                    202 kB/s | 4.3 kB     00:00
Nov 25 10:06:40 compute-0 python3.9[35497]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:06:40 compute-0 dnf[34513]: Extra Packages for Enterprise Linux 9 - x86_64  249 kB/s |  31 kB     00:00
Nov 25 10:06:41 compute-0 dnf[34513]: Metadata cache created.
Nov 25 10:06:41 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 25 10:06:41 compute-0 systemd[1]: Finished dnf makecache.
Nov 25 10:06:41 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.866s CPU time.
Nov 25 10:06:41 compute-0 sudo[35495]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:42 compute-0 sudo[35782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nczjrvniswgzlhlzcmsxyotrpjtewwdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065202.0822184-173-166599535074525/AnsiballZ_selinux.py'
Nov 25 10:06:42 compute-0 sudo[35782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:43 compute-0 python3.9[35784]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 25 10:06:43 compute-0 sudo[35782]: pam_unix(sudo:session): session closed for user root
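The ansible.posix.selinux call above (policy=targeted, state=enforcing, configfile=/etc/selinux/config) amounts to the manual change sketched below; the module edits the file itself, this is only the shell equivalent:

    # Persist the policy and mode in /etc/selinux/config
    sed -i -e 's/^SELINUX=.*/SELINUX=enforcing/' \
           -e 's/^SELINUXTYPE=.*/SELINUXTYPE=targeted/' /etc/selinux/config
    # Switch the running kernel to enforcing if it is currently permissive
    setenforce 1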
Nov 25 10:06:43 compute-0 sudo[35934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zugqnnvldqsaalxefqfmicnsbzkcsrxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065203.457817-184-165221544300123/AnsiballZ_command.py'
Nov 25 10:06:43 compute-0 sudo[35934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:44 compute-0 python3.9[35936]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 25 10:06:44 compute-0 sudo[35934]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:45 compute-0 sudo[36087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjczvlmjcadbnhmgjrfsnonewitojkmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065205.1380758-192-237829104438487/AnsiballZ_file.py'
Nov 25 10:06:45 compute-0 sudo[36087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:46 compute-0 python3.9[36089]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:06:46 compute-0 sudo[36087]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:46 compute-0 sudo[36239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thzuvglcpbtvcpkvuummybhdzfbhkixi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065206.2622225-200-259389789603787/AnsiballZ_mount.py'
Nov 25 10:06:46 compute-0 sudo[36239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:46 compute-0 python3.9[36241]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 25 10:06:46 compute-0 sudo[36239]: pam_unix(sudo:session): session closed for user root
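Taken together with the mkswap/swapon records at 10:07:25-10:07:26 further down, the swap tasks in this span amount to the shell sequence below; the fstab line is reconstructed from the mount module params (src=/swap, name=none, fstype=swap, opts=sw, dump=0, passno=0):

    dd if=/dev/zero of=/swap bs=1M count=1024   # skipped when /swap exists (creates=/swap)
    chmod 0600 /swap
    # state=present only records the fstab entry; it does not activate the swap
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
    mkswap /swap
    swapon /swap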
Nov 25 10:06:48 compute-0 sudo[36391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neiifjxbeeolxhvdgsvsbioczoppzblr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065207.688358-228-58286462308799/AnsiballZ_file.py'
Nov 25 10:06:48 compute-0 sudo[36391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:48 compute-0 python3.9[36393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:06:48 compute-0 sudo[36391]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:48 compute-0 sudo[36543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpqdnyvrdlebrxikqivlwtbpxmdkanfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065208.4779425-236-90558484239739/AnsiballZ_stat.py'
Nov 25 10:06:48 compute-0 sudo[36543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:49 compute-0 python3.9[36545]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:06:49 compute-0 sudo[36543]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:49 compute-0 sudo[36666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgaqfocdzsgclkgeytyznjcyudbemyjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065208.4779425-236-90558484239739/AnsiballZ_copy.py'
Nov 25 10:06:49 compute-0 sudo[36666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:49 compute-0 python3.9[36668]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065208.4779425-236-90558484239739/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:06:49 compute-0 sudo[36666]: pam_unix(sudo:session): session closed for user root
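Combined with the /usr/bin/update-ca-trust run logged at 10:07:26 below, this is the usual anchor-then-rebuild sequence; the local source path here is hypothetical, the destination, owner, group, and mode come from the record:

    install -o root -g root -m 0644 ./tls-ca-bundle.pem \
        /etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem
    update-ca-trust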
Nov 25 10:06:50 compute-0 sudo[36818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egmmgwnwbmqxqvqyhjrjjxxnibjammsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065210.2094545-260-20490686690507/AnsiballZ_stat.py'
Nov 25 10:06:50 compute-0 sudo[36818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:53 compute-0 python3.9[36820]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:06:53 compute-0 sudo[36818]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:53 compute-0 sudo[36970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cifyhaxthrhldffccygwkwftkisodcba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065213.3745823-268-149744630724868/AnsiballZ_command.py'
Nov 25 10:06:53 compute-0 sudo[36970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:53 compute-0 python3.9[36972]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:06:54 compute-0 sudo[36970]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:54 compute-0 sudo[37123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvrsrhahqaftzxntwrpwskmcccgxaoze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065214.2222948-276-174831071669764/AnsiballZ_file.py'
Nov 25 10:06:54 compute-0 sudo[37123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:54 compute-0 python3.9[37125]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:06:54 compute-0 sudo[37123]: pam_unix(sudo:session): session closed for user root
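The two LVM tasks above bootstrap the devices file: import any volume groups visible on the host, then make sure /etc/lvm/devices/system.devices exists even when no VGs were found. As shell:

    /usr/sbin/vgimportdevices --all || true   # tolerating "no VGs" is an assumption; the log does not show the task result
    touch /etc/lvm/devices/system.devices
    chmod 0600 /etc/lvm/devices/system.devices
    chown root:root /etc/lvm/devices/system.devices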
Nov 25 10:06:55 compute-0 sudo[37275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahllztmzuyqqpjyhllppsawaqrlifuaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065215.1928668-287-254884778818691/AnsiballZ_getent.py'
Nov 25 10:06:55 compute-0 sudo[37275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:55 compute-0 python3.9[37277]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 25 10:06:55 compute-0 sudo[37275]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:55 compute-0 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:06:56 compute-0 sudo[37429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itiiweqxlmwgnatmodfbowxxhdlhgmhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065216.1881118-295-165983888337711/AnsiballZ_group.py'
Nov 25 10:06:56 compute-0 sudo[37429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:56 compute-0 python3.9[37431]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:06:56 compute-0 groupadd[37432]: group added to /etc/group: name=qemu, GID=107
Nov 25 10:06:56 compute-0 groupadd[37432]: group added to /etc/gshadow: name=qemu
Nov 25 10:06:56 compute-0 groupadd[37432]: new group: name=qemu, GID=107
Nov 25 10:06:56 compute-0 sudo[37429]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:57 compute-0 sudo[37587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofuwivqzpxjjyqcyjskgiccjanwlmbzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065217.14311-303-198311064851449/AnsiballZ_user.py'
Nov 25 10:06:57 compute-0 sudo[37587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:57 compute-0 python3.9[37589]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 10:06:57 compute-0 useradd[37591]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 10:06:58 compute-0 sudo[37587]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:58 compute-0 sudo[37747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sptmobgwgvgulxhfnxlmspkzsfxegnxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065218.2704434-311-72376097182179/AnsiballZ_getent.py'
Nov 25 10:06:58 compute-0 sudo[37747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:58 compute-0 python3.9[37749]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 25 10:06:58 compute-0 sudo[37747]: pam_unix(sudo:session): session closed for user root
Nov 25 10:06:59 compute-0 sudo[37900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnxcdgqqmmfdmsrdsuwxghlocvceasob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065219.0017753-319-206534621377703/AnsiballZ_group.py'
Nov 25 10:06:59 compute-0 sudo[37900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:06:59 compute-0 python3.9[37902]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:06:59 compute-0 groupadd[37903]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 25 10:06:59 compute-0 groupadd[37903]: group added to /etc/gshadow: name=hugetlbfs
Nov 25 10:06:59 compute-0 groupadd[37903]: new group: name=hugetlbfs, GID=42477
Nov 25 10:06:59 compute-0 sudo[37900]: pam_unix(sudo:session): session closed for user root
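The getent/group/user records above appear to pin static IDs ahead of the packages that would otherwise allocate them dynamically; the equivalent commands, with every name and ID taken from the log:

    groupadd -g 107 qemu
    useradd -u 107 -g qemu -c 'qemu user' -s /sbin/nologin -m qemu
    groupadd -g 42477 hugetlbfs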
Nov 25 10:07:00 compute-0 sudo[38058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlerrbfjrgawcyilrrnwxlupbhafebhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065219.7969968-328-102008907326366/AnsiballZ_file.py'
Nov 25 10:07:00 compute-0 sudo[38058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:00 compute-0 python3.9[38060]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 25 10:07:00 compute-0 sudo[38058]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:01 compute-0 sudo[38210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsroaeerrrhbtfiiwoezvnknstqesdki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065220.6749547-339-163866119394637/AnsiballZ_dnf.py'
Nov 25 10:07:01 compute-0 sudo[38210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:01 compute-0 python3.9[38212]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:07:02 compute-0 sudo[38210]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:03 compute-0 sudo[38363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziscznfsqhicinjxwriridumuxyuxmqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065223.0219877-347-194678998171337/AnsiballZ_file.py'
Nov 25 10:07:03 compute-0 sudo[38363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:03 compute-0 python3.9[38365]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:07:03 compute-0 sudo[38363]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:04 compute-0 sudo[38515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujbwstqxqcznygumzqiyulueotuxqbhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065223.7999382-355-111369861835712/AnsiballZ_stat.py'
Nov 25 10:07:04 compute-0 sudo[38515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:04 compute-0 python3.9[38517]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:07:04 compute-0 sudo[38515]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:04 compute-0 sudo[38638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khmrumjzvbghpqkhjlhsyakqmepyohby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065223.7999382-355-111369861835712/AnsiballZ_copy.py'
Nov 25 10:07:04 compute-0 sudo[38638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:04 compute-0 python3.9[38640]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065223.7999382-355-111369861835712/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:07:05 compute-0 sudo[38638]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:05 compute-0 sudo[38790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arjhvaccdpgwzrffqenvdsbyrvzdcfhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065225.211316-370-165738849900448/AnsiballZ_systemd.py'
Nov 25 10:07:05 compute-0 sudo[38790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:06 compute-0 python3.9[38792]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:07:06 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 10:07:06 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 25 10:07:06 compute-0 kernel: Bridge firewalling registered
Nov 25 10:07:06 compute-0 systemd-modules-load[38796]: Inserted module 'br_netfilter'
Nov 25 10:07:06 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 10:07:06 compute-0 sudo[38790]: pam_unix(sudo:session): session closed for user root
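The file contents were not logged (content=NOT_LOGGING_PARAMETER), but the "Inserted module 'br_netfilter'" line above confirms at least that entry, so a minimal reconstruction of this step is:

    # Minimal reconstruction; the real edpm-modprobe.conf.j2 template may list more modules
    cat > /etc/modules-load.d/99-edpm.conf <<'EOF'
    br_netfilter
    EOF
    systemctl restart systemd-modules-load.service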
Nov 25 10:07:06 compute-0 sudo[38952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctoeiulxorcbfjxxtzyzrtvewnjwzdvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065226.591924-378-206480854281827/AnsiballZ_stat.py'
Nov 25 10:07:06 compute-0 sudo[38952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:07 compute-0 python3.9[38954]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:07:07 compute-0 sudo[38952]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:07 compute-0 sudo[39075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uthekzvgmsszfejnhrqcejuucsxcojzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065226.591924-378-206480854281827/AnsiballZ_copy.py'
Nov 25 10:07:07 compute-0 sudo[39075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:07 compute-0 python3.9[39077]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065226.591924-378-206480854281827/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:07:07 compute-0 sudo[39075]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:08 compute-0 sudo[39227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzlhgbtjvznnsbgsfrtowukgtfnjhjla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065227.9663937-396-154061695394377/AnsiballZ_dnf.py'
Nov 25 10:07:08 compute-0 sudo[39227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:08 compute-0 python3.9[39229]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:07:13 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 10:07:13 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 10:07:13 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:07:13 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:07:13 compute-0 systemd[1]: Reloading.
Nov 25 10:07:13 compute-0 systemd-rc-local-generator[39291]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:07:13 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:07:14 compute-0 sudo[39227]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:15 compute-0 python3.9[40433]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:07:16 compute-0 python3.9[41329]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 25 10:07:16 compute-0 python3.9[42163]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:07:17 compute-0 sudo[42967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjfnoxiackvgyujfjxstyhvxowyrpggc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065236.890975-435-70851340133418/AnsiballZ_command.py'
Nov 25 10:07:17 compute-0 sudo[42967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:17 compute-0 python3.9[42991]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:07:17 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 10:07:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:07:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:07:17 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.233s CPU time.
Nov 25 10:07:17 compute-0 systemd[1]: run-r068ac449e92e4e4499f24fdd5c61d4e0.service: Deactivated successfully.
Nov 25 10:07:17 compute-0 systemd[1]: Starting Authorization Manager...
Nov 25 10:07:17 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 10:07:18 compute-0 polkitd[43613]: Started polkitd version 0.117
Nov 25 10:07:18 compute-0 polkitd[43613]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 10:07:18 compute-0 polkitd[43613]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 10:07:18 compute-0 polkitd[43613]: Finished loading, compiling and executing 2 rules
Nov 25 10:07:18 compute-0 polkitd[43613]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 25 10:07:18 compute-0 systemd[1]: Started Authorization Manager.
Nov 25 10:07:18 compute-0 sudo[42967]: pam_unix(sudo:session): session closed for user root
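After tuned-adm returns, the active profile can be confirmed the same way the playbook does at 10:07:15-10:07:16 above (stat plus slurp of the state file), or interactively with the stock tuned CLI:

    tuned-adm active                  # expected: Current active profile: throughput-performance
    cat /etc/tuned/active_profile     # expected: throughput-performance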
Nov 25 10:07:18 compute-0 sudo[43781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myflbdhchcdzkmvwrcucqtpnzmopkbxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065238.3296504-444-256301069196481/AnsiballZ_systemd.py'
Nov 25 10:07:18 compute-0 sudo[43781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:18 compute-0 python3.9[43783]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:07:19 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 25 10:07:20 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 25 10:07:20 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 25 10:07:20 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 25 10:07:20 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 25 10:07:20 compute-0 sudo[43781]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:21 compute-0 python3.9[43945]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 25 10:07:22 compute-0 sudo[44095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czyrdxadmpzgaqmprseialolpfexkskf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065242.5059333-501-97530847025025/AnsiballZ_systemd.py'
Nov 25 10:07:22 compute-0 sudo[44095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:23 compute-0 python3.9[44097]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:07:23 compute-0 systemd[1]: Reloading.
Nov 25 10:07:23 compute-0 systemd-rc-local-generator[44125]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:07:23 compute-0 sudo[44095]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:23 compute-0 sudo[44283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyuwmprqllhzqmxwbcotkgnegvmddsba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065243.5715375-501-134368616722406/AnsiballZ_systemd.py'
Nov 25 10:07:23 compute-0 sudo[44283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:24 compute-0 python3.9[44285]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:07:24 compute-0 systemd[1]: Reloading.
Nov 25 10:07:24 compute-0 systemd-rc-local-generator[44309]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:07:24 compute-0 sudo[44283]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:25 compute-0 sudo[44473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-celpqhewfyqqljwzsquhauithsgykbkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065244.7824972-517-203730156291518/AnsiballZ_command.py'
Nov 25 10:07:25 compute-0 sudo[44473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:25 compute-0 python3.9[44475]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:07:25 compute-0 sudo[44473]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:25 compute-0 sudo[44626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujlyooiiuylrsxpicdkmbksbhtclyrbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065245.500362-525-163981728755188/AnsiballZ_command.py'
Nov 25 10:07:25 compute-0 sudo[44626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:26 compute-0 python3.9[44628]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:07:26 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 25 10:07:26 compute-0 sudo[44626]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:26 compute-0 sudo[44779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddixtyqbirdykxxciqevmpnviybjcihw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065246.2339823-533-172381570990051/AnsiballZ_command.py'
Nov 25 10:07:26 compute-0 sudo[44779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:26 compute-0 python3.9[44781]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:07:28 compute-0 sudo[44779]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:28 compute-0 sudo[44941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqojfyhvzcifxcjbjywypamqqutwfehu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065248.4556346-541-151548711644102/AnsiballZ_command.py'
Nov 25 10:07:28 compute-0 sudo[44941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:28 compute-0 python3.9[44943]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:07:29 compute-0 sudo[44941]: pam_unix(sudo:session): session closed for user root
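Worth flagging: the record above hands a shell redirection to the command module with _uses_shell=False, in which mode '>' is passed to echo as a literal argument rather than performing a redirect. The presumably intended effect would need the shell module or, as plain shell:

    echo 2 > /sys/kernel/mm/ksm/run   # 2 = stop KSM and unmerge all merged pages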
Nov 25 10:07:29 compute-0 sudo[45094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smqhsknvgbhylabjgeqdtvmbcfxzzedi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065249.1941998-549-114906337023484/AnsiballZ_systemd.py'
Nov 25 10:07:29 compute-0 sudo[45094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:29 compute-0 python3.9[45096]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:07:29 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 25 10:07:29 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 25 10:07:29 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 25 10:07:29 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 25 10:07:29 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 25 10:07:29 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 25 10:07:29 compute-0 sudo[45094]: pam_unix(sudo:session): session closed for user root
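
Restarting systemd-sysctl.service re-applies every sysctl drop-in, which is how the playbook makes freshly written /etc/sysctl.d entries take effect without a reboot. Equivalent shell:

    systemctl restart systemd-sysctl.service
    # or apply all configured keys directly
    sysctl --system
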
Nov 25 10:07:30 compute-0 sshd-session[31479]: Connection closed by 192.168.122.30 port 36056
Nov 25 10:07:30 compute-0 sshd-session[31476]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:07:30 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 25 10:07:30 compute-0 systemd[1]: session-10.scope: Consumed 2min 19.908s CPU time.
Nov 25 10:07:30 compute-0 systemd-logind[822]: Session 10 logged out. Waiting for processes to exit.
Nov 25 10:07:30 compute-0 systemd-logind[822]: Removed session 10.
Nov 25 10:07:35 compute-0 sshd-session[45126]: Accepted publickey for zuul from 192.168.122.30 port 52902 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:07:35 compute-0 systemd-logind[822]: New session 11 of user zuul.
Nov 25 10:07:35 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 25 10:07:35 compute-0 sshd-session[45126]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:07:36 compute-0 python3.9[45279]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:07:37 compute-0 python3.9[45433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:07:38 compute-0 sudo[45587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmhwffsuxtzndnkkbfilhkzygsyofzdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065258.0129418-50-112639990858401/AnsiballZ_command.py'
Nov 25 10:07:38 compute-0 sudo[45587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:38 compute-0 python3.9[45589]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:07:38 compute-0 sudo[45587]: pam_unix(sudo:session): session closed for user root
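
This task probes for the growvols utility under an explicit PATH so the result does not depend on the login environment; a non-zero exit simply means the tool is absent. The pattern in isolation:

    if PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols; then
        echo "growvols is available"
    fi
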
Nov 25 10:07:39 compute-0 python3.9[45742]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:07:40 compute-0 sshd-session[45615]: Connection closed by authenticating user root 171.244.51.45 port 47226 [preauth]
Nov 25 10:07:40 compute-0 sudo[45896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oruzaknquuadnpmteyjjfzsvzdkhkdyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065259.9421217-70-66583916375791/AnsiballZ_setup.py'
Nov 25 10:07:40 compute-0 sudo[45896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:40 compute-0 python3.9[45898]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:07:40 compute-0 sudo[45896]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:41 compute-0 sudo[45980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oslqttpkzaynbrhmosfsqxvgtfiavmwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065259.9421217-70-66583916375791/AnsiballZ_dnf.py'
Nov 25 10:07:41 compute-0 sudo[45980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:41 compute-0 python3.9[45982]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:07:42 compute-0 sudo[45980]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:43 compute-0 sudo[46133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivbagynrrujaagneigxrqvgovknorqed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065262.83156-82-22827924760140/AnsiballZ_setup.py'
Nov 25 10:07:43 compute-0 sudo[46133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:43 compute-0 python3.9[46135]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:07:43 compute-0 sudo[46133]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:44 compute-0 sudo[46304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vogphimltctmtelkrtnpyeqeznazdroz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065263.7318735-93-115420124437724/AnsiballZ_file.py'
Nov 25 10:07:44 compute-0 sudo[46304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:44 compute-0 python3.9[46306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:07:44 compute-0 sudo[46304]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:44 compute-0 sudo[46456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-magggzqpidiywxlqtqhmopvatpuiyioz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065264.584048-101-1245441340282/AnsiballZ_command.py'
Nov 25 10:07:44 compute-0 sudo[46456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:45 compute-0 python3.9[46458]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3760633539-merged.mount: Deactivated successfully.
Nov 25 10:07:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1943977458-merged.mount: Deactivated successfully.
Nov 25 10:07:45 compute-0 podman[46459]: 2025-11-25 10:07:45.176978797 +0000 UTC m=+0.051187004 system refresh
Nov 25 10:07:45 compute-0 sudo[46456]: pam_unix(sudo:session): session closed for user root
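
podman network inspect dumps the default network definition as JSON; the "system refresh" event above shows this was podman's first run on the freshly configured storage. Run by hand, the output can be filtered with jq (installed later in this run), e.g.:

    podman network inspect podman
    podman network inspect podman | jq '.[0].subnets'
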
Nov 25 10:07:45 compute-0 sudo[46619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgltvyrljnirtedpmdxarxtfeimucghz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065265.3829267-109-182261090568388/AnsiballZ_stat.py'
Nov 25 10:07:45 compute-0 sudo[46619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:46 compute-0 python3.9[46621]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:07:46 compute-0 sudo[46619]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:07:46 compute-0 sudo[46742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mireougnabeocgfdmxbpfnwhubymezta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065265.3829267-109-182261090568388/AnsiballZ_copy.py'
Nov 25 10:07:46 compute-0 sudo[46742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:46 compute-0 python3.9[46744]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065265.3829267-109-182261090568388/.source.json follow=False _original_basename=podman_network_config.j2 checksum=20883ce7180170bedd525ec1156cc676e98b86be backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:07:46 compute-0 sudo[46742]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:47 compute-0 sudo[46894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-limcqojoiiixcwsjakpfqrvrofjofnci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065266.9032516-124-62707327736070/AnsiballZ_stat.py'
Nov 25 10:07:47 compute-0 sudo[46894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:47 compute-0 python3.9[46896]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:07:47 compute-0 sudo[46894]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:47 compute-0 sudo[47017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjmuclkqzwikrtvgccqvlvgxfzzuqwqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065266.9032516-124-62707327736070/AnsiballZ_copy.py'
Nov 25 10:07:47 compute-0 sudo[47017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:47 compute-0 python3.9[47019]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065266.9032516-124-62707327736070/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c7e24e791b23b6ca9af1b87173047a0fb53188da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:07:47 compute-0 sudo[47017]: pam_unix(sudo:session): session closed for user root
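
The copied file is a containers-registries.conf(5) drop-in; anything under /etc/containers/registries.conf.d overrides the main registries.conf. The real contents come from the registries.conf.j2 template and are not in the log, but such a drop-in typically looks like:

    cat > /etc/containers/registries.conf.d/20-edpm-podman-registries.conf <<'EOF'
    # hypothetical example contents
    unqualified-search-registries = ["quay.io"]
    EOF
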
Nov 25 10:07:48 compute-0 sudo[47169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjqjfvxztucuztjqgugqrluqnwmeckaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065268.157933-140-265278798243516/AnsiballZ_ini_file.py'
Nov 25 10:07:48 compute-0 sudo[47169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:48 compute-0 python3.9[47171]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:07:48 compute-0 sudo[47169]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:49 compute-0 sudo[47321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmmjmzwndbwhiavvxdivsjphsucoufcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065268.895935-140-149074567044951/AnsiballZ_ini_file.py'
Nov 25 10:07:49 compute-0 sudo[47321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:49 compute-0 python3.9[47323]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:07:49 compute-0 sudo[47321]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:49 compute-0 sudo[47473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkbfugazwraryxhbgkxfqivcvilcgzfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065269.6168869-140-35781541916146/AnsiballZ_ini_file.py'
Nov 25 10:07:49 compute-0 sudo[47473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:50 compute-0 python3.9[47475]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:07:50 compute-0 sudo[47473]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:50 compute-0 sudo[47625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsdzofmyiasimtujaqtilazcwqsylkth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065270.3064392-140-124004472714554/AnsiballZ_ini_file.py'
Nov 25 10:07:50 compute-0 sudo[47625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:50 compute-0 python3.9[47627]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:07:50 compute-0 sudo[47625]: pam_unix(sudo:session): session closed for user root
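
The four ini_file tasks leave /etc/containers/containers.conf with a per-container pids limit, journald event logging, the crun runtime, and the netavark network backend. The same edits with crudini (installed a few tasks later in this run); containers.conf is TOML, so the string values keep their quotes:

    crudini --set /etc/containers/containers.conf containers pids_limit 4096
    crudini --set /etc/containers/containers.conf engine events_logger '"journald"'
    crudini --set /etc/containers/containers.conf engine runtime '"crun"'
    crudini --set /etc/containers/containers.conf network network_backend '"netavark"'
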
Nov 25 10:07:51 compute-0 python3.9[47777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:07:52 compute-0 sudo[47929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrefshdgpbnlmdxnrrbsascwzzmafwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065272.103284-180-155744003593485/AnsiballZ_dnf.py'
Nov 25 10:07:52 compute-0 sudo[47929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:52 compute-0 python3.9[47931]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:07:53 compute-0 sudo[47929]: pam_unix(sudo:session): session closed for user root
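
Every dnf task in this stretch runs with download_only=True: packages are only fetched into the local cache, and a later plain install (as happens for openvswitch at 10:09:47 below) completes from cache. The CLI equivalent:

    # fetch without installing (first few packages of the logged set shown)
    dnf install -y --downloadonly driverctl lvm2 crudini jq nftables
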
Nov 25 10:07:54 compute-0 sudo[48082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhyqdgvaftnlejnbbrnfxqzvlwocamv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065274.0399802-188-150851623417141/AnsiballZ_dnf.py'
Nov 25 10:07:54 compute-0 sudo[48082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:54 compute-0 python3.9[48084]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:07:57 compute-0 sudo[48082]: pam_unix(sudo:session): session closed for user root
Nov 25 10:07:58 compute-0 sudo[48243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewqwefuhfwfcbjexlgpyuegbkoryquv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065278.2344177-198-113368705049199/AnsiballZ_dnf.py'
Nov 25 10:07:58 compute-0 sudo[48243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:07:58 compute-0 python3.9[48245]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:07:59 compute-0 sudo[48243]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:00 compute-0 sudo[48396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrjttqfnaxghgjuvgnblbtjjaxarutig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065280.207479-207-144672478216142/AnsiballZ_dnf.py'
Nov 25 10:08:00 compute-0 sudo[48396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:00 compute-0 python3.9[48398]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:08:02 compute-0 sudo[48396]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:02 compute-0 sudo[48549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxruxoeckzskxkstfeohsxatjkfbwstc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065282.554592-218-131799602617977/AnsiballZ_dnf.py'
Nov 25 10:08:02 compute-0 sudo[48549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:03 compute-0 python3.9[48551]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:08:04 compute-0 sudo[48549]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:05 compute-0 sudo[48705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qubwcpcbagdetwrjndtyfkwylsadxlmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065285.4993577-226-243338101983856/AnsiballZ_dnf.py'
Nov 25 10:08:05 compute-0 sudo[48705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:06 compute-0 python3.9[48707]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:08:08 compute-0 sudo[48705]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:09 compute-0 sudo[48873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzzsksqkwaybzcrdwfryisxmihohnvya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065289.1900604-235-231332355289496/AnsiballZ_dnf.py'
Nov 25 10:08:09 compute-0 sudo[48873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:09 compute-0 python3.9[48875]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:08:11 compute-0 sudo[48873]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:11 compute-0 sudo[49026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbqsdbkolcmlwaflemyifdqjkjgpjaap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065291.4974613-244-72300977054454/AnsiballZ_dnf.py'
Nov 25 10:08:11 compute-0 sudo[49026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:12 compute-0 python3.9[49028]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:08:23 compute-0 sudo[49026]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:24 compute-0 sudo[49363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhwfbnnfzvevmanofnmjaqlgikghaujt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065304.2129047-253-175484447415818/AnsiballZ_dnf.py'
Nov 25 10:08:24 compute-0 sudo[49363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:24 compute-0 python3.9[49365]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:08:26 compute-0 sudo[49363]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:26 compute-0 sudo[49519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kesbqczeyhhcdzyinalttwafbjlbjzum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065306.4221027-264-85221659162540/AnsiballZ_file.py'
Nov 25 10:08:26 compute-0 sudo[49519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:26 compute-0 python3.9[49521]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:08:26 compute-0 sudo[49519]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:27 compute-0 sudo[49694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qphxshrhewkwwceefoolhxvwirlkacod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065307.1735954-272-184872541933676/AnsiballZ_stat.py'
Nov 25 10:08:27 compute-0 sudo[49694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:27 compute-0 python3.9[49696]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:08:27 compute-0 sudo[49694]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:28 compute-0 sudo[49817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yreceroifxidhhqdftxtwnlbknrolvif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065307.1735954-272-184872541933676/AnsiballZ_copy.py'
Nov 25 10:08:28 compute-0 sudo[49817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:28 compute-0 python3.9[49819]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764065307.1735954-272-184872541933676/.source.json _original_basename=.etoruoz0 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:08:28 compute-0 sudo[49817]: pam_unix(sudo:session): session closed for user root
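
The checksum recorded for auth.json, bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f, appears to be the SHA-1 of the two-byte string "{}", i.e. an empty credentials object, consistent with the anonymous image pulls that follow. Easy to verify:

    printf '{}' | sha1sum
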
Nov 25 10:08:29 compute-0 sudo[49969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhiaofgxugmuecoxfnwgorrkuuewlepr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065308.7964363-290-25881733119179/AnsiballZ_podman_image.py'
Nov 25 10:08:29 compute-0 sudo[49969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:29 compute-0 python3.9[49971]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 25 10:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat569720215-lower\x2dmapped.mount: Deactivated successfully.
Nov 25 10:08:35 compute-0 podman[49983]: 2025-11-25 10:08:35.444887846 +0000 UTC m=+5.809709458 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 10:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:35 compute-0 sudo[49969]: pam_unix(sudo:session): session closed for user root
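
Each podman_image task resolves to a pull using the auth file written above; the "image pull" journal line records the resulting image ID. By hand, this step is roughly:

    podman pull --authfile /root/.config/containers/auth.json \
        quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
    # confirm the image landed in local storage
    podman images quay.io/podified-antelope-centos9/openstack-ovn-controller
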
Nov 25 10:08:36 compute-0 sudo[50279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yptdaxfsjrbqywpikcdqwipakpozrblm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065316.112449-301-216220070037721/AnsiballZ_podman_image.py'
Nov 25 10:08:36 compute-0 sudo[50279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:36 compute-0 python3.9[50281]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 25 10:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:46 compute-0 podman[50293]: 2025-11-25 10:08:46.729072908 +0000 UTC m=+10.027076346 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 10:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:47 compute-0 sudo[50279]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:47 compute-0 sudo[50590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxclzsiaziksydhdxtffqabwqyefeqkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065327.5594487-311-131404881741421/AnsiballZ_podman_image.py'
Nov 25 10:08:47 compute-0 sudo[50590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:48 compute-0 python3.9[50592]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 25 10:08:48 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:49 compute-0 podman[50605]: 2025-11-25 10:08:49.466832904 +0000 UTC m=+1.368557989 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 10:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:08:49 compute-0 sudo[50590]: pam_unix(sudo:session): session closed for user root
Nov 25 10:08:50 compute-0 sudo[50839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zipyebnkxzjnbgdabyambgivstybrice ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065330.021307-320-258719171845770/AnsiballZ_podman_image.py'
Nov 25 10:08:50 compute-0 sudo[50839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:08:50 compute-0 python3.9[50841]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 25 10:08:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:00 compute-0 podman[50853]: 2025-11-25 10:09:00.239385626 +0000 UTC m=+9.514343725 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 10:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:00 compute-0 sudo[50839]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:01 compute-0 sudo[51162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fczgyczxsugqazvtftprlohhgrlnfahw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065341.164876-331-151711151991717/AnsiballZ_podman_image.py'
Nov 25 10:09:01 compute-0 sudo[51162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:01 compute-0 python3.9[51164]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 25 10:09:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:19 compute-0 podman[51178]: 2025-11-25 10:09:19.604801272 +0000 UTC m=+17.853658693 image pull 62d0cdbd80511c7b16dc1b12830c26126f29d8961a194546e50bdb4d0a16aab7 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 25 10:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:19 compute-0 sudo[51162]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:20 compute-0 sudo[51494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcvafimujxhyrffkqdpjltvlmohctrjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065359.953694-331-271448587525062/AnsiballZ_podman_image.py'
Nov 25 10:09:20 compute-0 sudo[51494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:20 compute-0 python3.9[51496]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 25 10:09:22 compute-0 podman[51509]: 2025-11-25 10:09:22.359004035 +0000 UTC m=+1.873484670 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 25 10:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:22 compute-0 sudo[51494]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:23 compute-0 sudo[51781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikdhirhqntkodcltekrgptbqfmtpwxrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065362.9494925-347-163178526090229/AnsiballZ_podman_image.py'
Nov 25 10:09:23 compute-0 sudo[51781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:23 compute-0 python3.9[51783]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 25 10:09:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:26 compute-0 podman[51795]: 2025-11-25 10:09:26.197375617 +0000 UTC m=+2.677098296 image pull 02e0056780c6b31017996766cd13000137ba644dac3fc851da034db8cf4ceb2c quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 25 10:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:26 compute-0 sudo[51781]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:26 compute-0 sudo[52049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpwmypytnsjxvacjptolxupxgaunjljr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065366.5291598-347-92086356943546/AnsiballZ_podman_image.py'
Nov 25 10:09:26 compute-0 sudo[52049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:27 compute-0 python3.9[52051]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 25 10:09:33 compute-0 podman[52064]: 2025-11-25 10:09:33.186052473 +0000 UTC m=+6.105089961 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 25 10:09:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:09:33 compute-0 sudo[52049]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:33 compute-0 sshd-session[45129]: Connection closed by 192.168.122.30 port 52902
Nov 25 10:09:33 compute-0 sshd-session[45126]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:09:33 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 25 10:09:33 compute-0 systemd[1]: session-11.scope: Consumed 2min 35.030s CPU time.
Nov 25 10:09:33 compute-0 systemd-logind[822]: Session 11 logged out. Waiting for processes to exit.
Nov 25 10:09:33 compute-0 systemd-logind[822]: Removed session 11.
Nov 25 10:09:39 compute-0 sshd-session[52310]: Accepted publickey for zuul from 192.168.122.30 port 54466 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:09:39 compute-0 systemd-logind[822]: New session 12 of user zuul.
Nov 25 10:09:39 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 25 10:09:39 compute-0 sshd-session[52310]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:09:40 compute-0 python3.9[52463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:09:41 compute-0 sudo[52617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vusgipzclmwqcgeplhbyhettpwjkfmdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065380.6999443-36-47244961156686/AnsiballZ_getent.py'
Nov 25 10:09:41 compute-0 sudo[52617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:41 compute-0 python3.9[52619]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 25 10:09:41 compute-0 sudo[52617]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:42 compute-0 sudo[52770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naqqanguhtqmnlvnzireswfuesdkofzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065381.572893-44-124055298941136/AnsiballZ_group.py'
Nov 25 10:09:42 compute-0 sudo[52770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:42 compute-0 python3.9[52772]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:09:42 compute-0 groupadd[52773]: group added to /etc/group: name=openvswitch, GID=42476
Nov 25 10:09:42 compute-0 groupadd[52773]: group added to /etc/gshadow: name=openvswitch
Nov 25 10:09:42 compute-0 groupadd[52773]: new group: name=openvswitch, GID=42476
Nov 25 10:09:42 compute-0 sudo[52770]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:43 compute-0 sudo[52928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vptcugfcvqhnwxfdfunqavfbxjqdyxjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065382.6029112-52-36016851541900/AnsiballZ_user.py'
Nov 25 10:09:43 compute-0 sudo[52928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:43 compute-0 python3.9[52930]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 10:09:43 compute-0 useradd[52932]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 10:09:43 compute-0 useradd[52932]: add 'openvswitch' to group 'hugetlbfs'
Nov 25 10:09:43 compute-0 useradd[52932]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 25 10:09:43 compute-0 sudo[52928]: pam_unix(sudo:session): session closed for user root
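
The group and user tasks pin openvswitch to UID/GID 42476 with a nologin shell and supplementary membership in hugetlbfs, exactly as the groupadd/useradd lines record. Shell equivalent:

    groupadd -g 42476 openvswitch
    useradd -u 42476 -g openvswitch -G hugetlbfs -s /sbin/nologin \
        -c "openvswitch user" openvswitch
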
Nov 25 10:09:44 compute-0 sudo[53088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oljdgdxdqlmovyqpjzcnnmmmqnffdkfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065383.7281744-62-159297818896938/AnsiballZ_setup.py'
Nov 25 10:09:44 compute-0 sudo[53088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:44 compute-0 python3.9[53090]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:09:44 compute-0 sudo[53088]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:45 compute-0 sudo[53172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruahsymghzfigvjksxpnzjtwcfztiqnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065383.7281744-62-159297818896938/AnsiballZ_dnf.py'
Nov 25 10:09:45 compute-0 sudo[53172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:45 compute-0 python3.9[53174]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:09:46 compute-0 sudo[53172]: pam_unix(sudo:session): session closed for user root
Nov 25 10:09:47 compute-0 sudo[53333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-refhuxvfitszhwxbxsyqcgkmsdqppcnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065387.1462574-76-195476950665107/AnsiballZ_dnf.py'
Nov 25 10:09:47 compute-0 sudo[53333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:09:47 compute-0 python3.9[53335]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:10:01 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Nov 25 10:10:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:10:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:10:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:10:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:10:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:10:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:10:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:10:01 compute-0 groupadd[53359]: group added to /etc/group: name=unbound, GID=993
Nov 25 10:10:01 compute-0 groupadd[53359]: group added to /etc/gshadow: name=unbound
Nov 25 10:10:01 compute-0 groupadd[53359]: new group: name=unbound, GID=993
Nov 25 10:10:01 compute-0 useradd[53366]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 25 10:10:01 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 25 10:10:01 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 25 10:10:03 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:10:03 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:10:03 compute-0 systemd[1]: Reloading.
Nov 25 10:10:03 compute-0 systemd-rc-local-generator[53859]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:10:03 compute-0 systemd-sysv-generator[53865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:10:03 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:10:04 compute-0 sudo[53333]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:04 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:10:04 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:10:04 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.058s CPU time.
Nov 25 10:10:04 compute-0 systemd[1]: run-r6f84a57c7b784a3ab275589b554da479.service: Deactivated successfully.
Nov 25 10:10:04 compute-0 sudo[54432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysziddtpipbkorzbrnojqyydluljfstw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065404.2775555-84-223307449743608/AnsiballZ_systemd.py'
Nov 25 10:10:04 compute-0 sudo[54432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:05 compute-0 python3.9[54434]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:10:05 compute-0 systemd[1]: Reloading.
Nov 25 10:10:05 compute-0 systemd-rc-local-generator[54465]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:10:05 compute-0 systemd-sysv-generator[54470]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:10:05 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 25 10:10:05 compute-0 chown[54476]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 25 10:10:05 compute-0 ovs-ctl[54481]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 25 10:10:05 compute-0 ovs-ctl[54481]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 25 10:10:05 compute-0 ovs-ctl[54481]: Starting ovsdb-server [  OK  ]
Nov 25 10:10:05 compute-0 ovs-vsctl[54530]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 25 10:10:05 compute-0 ovs-vsctl[54549]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"3fcb3423-a4d5-4f72-950c-307893e4a985\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 25 10:10:05 compute-0 ovs-ctl[54481]: Configuring Open vSwitch system IDs [  OK  ]
Nov 25 10:10:05 compute-0 ovs-ctl[54481]: Enabling remote OVSDB managers [  OK  ]
Nov 25 10:10:05 compute-0 ovs-vsctl[54555]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 10:10:05 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 25 10:10:05 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 25 10:10:06 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 25 10:10:06 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 25 10:10:06 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 25 10:10:06 compute-0 ovs-ctl[54599]: Inserting openvswitch module [  OK  ]
Nov 25 10:10:06 compute-0 ovs-ctl[54568]: Starting ovs-vswitchd [  OK  ]
Nov 25 10:10:06 compute-0 ovs-vsctl[54619]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 25 10:10:06 compute-0 ovs-ctl[54568]: Enabling remote OVSDB managers [  OK  ]
Nov 25 10:10:06 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 25 10:10:06 compute-0 systemd[1]: Starting Open vSwitch...
Nov 25 10:10:06 compute-0 systemd[1]: Finished Open vSwitch.
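The three units above (Database, Delete Transient Ports, Forwarding) are all pulled in by the single ansible.builtin.systemd task at 10:10:05 with enabled=True and state=started. A minimal standalone sketch of what that task amounts to, using plain systemctl from Python (ensure_unit_running is an illustrative helper, not part of the play):

    import subprocess

    def ensure_unit_running(unit: str) -> None:
        # "enable --now" covers both enabled=True and state=started in one call;
        # systemd then fans out to the Database/Forwarding units via dependencies.
        subprocess.run(["systemctl", "enable", "--now", unit], check=True)
        # is-active exits 0 only when the unit is active, so check=True verifies.
        subprocess.run(["systemctl", "is-active", "--quiet", unit], check=True)

    ensure_unit_running("openvswitch.service")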
Nov 25 10:10:06 compute-0 sudo[54432]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:07 compute-0 python3.9[54771]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:10:08 compute-0 sudo[54921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqlshnsmtmwrwmtqgyranqmqlokzcwqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065407.51035-102-276530230064889/AnsiballZ_sefcontext.py'
Nov 25 10:10:08 compute-0 sudo[54921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:08 compute-0 python3.9[54923]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 25 10:10:09 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Nov 25 10:10:09 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:10:09 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:10:09 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:10:09 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:10:09 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:10:09 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:10:09 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:10:09 compute-0 sudo[54921]: pam_unix(sudo:session): session closed for user root
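The sefcontext task above registers a persistent SELinux file-context rule, and the kernel "Converting SID table entries" lines are the policy reload it triggers (reload=True). A rough shell-out equivalent, assuming the usual semanage/restorecon pairing; this sketch only shows the happy path (a rerun would need -m instead of -a once the rule exists):

    import subprocess

    TARGET = r"/var/lib/edpm-config(/.*)?"

    # Register the rule in the policy store (what sefcontext does via libsemanage).
    subprocess.run(["semanage", "fcontext", "-a", "-t", "container_file_t", TARGET],
                   check=True)
    # Relabel so the rule takes effect; meaningful only once the directory
    # exists (it is created by the file task a few seconds later in this log).
    subprocess.run(["restorecon", "-R", "/var/lib/edpm-config"], check=True)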
Nov 25 10:10:10 compute-0 python3.9[55078]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:10:11 compute-0 sudo[55234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypwonqvdjldomoialcbnzwdbagttutt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065411.0289354-120-199323248019863/AnsiballZ_dnf.py'
Nov 25 10:10:11 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 25 10:10:11 compute-0 sudo[55234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:11 compute-0 python3.9[55236]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:10:12 compute-0 sudo[55234]: pam_unix(sudo:session): session closed for user root
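The dnf task above is state=present, i.e. idempotent: nothing is reinstalled if every package is already there. A compressed sketch of that check-then-install pattern (package list abridged from the log):

    import subprocess

    PKGS = ["driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager"]

    # rpm -q exits non-zero if any named package is missing; only then is dnf
    # invoked, which mirrors how state=present avoids needless transactions.
    if subprocess.run(["rpm", "-q", *PKGS], stdout=subprocess.DEVNULL).returncode != 0:
        subprocess.run(["dnf", "-y", "install", *PKGS], check=True)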
Nov 25 10:10:13 compute-0 sudo[55387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwybwlmwgtoodfyaluvgvagrpzaskzll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065413.0133364-128-189715390602346/AnsiballZ_command.py'
Nov 25 10:10:13 compute-0 sudo[55387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:13 compute-0 python3.9[55389]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:10:14 compute-0 sudo[55387]: pam_unix(sudo:session): session closed for user root
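The rpm -V command above verifies the freshly installed packages against the rpm database. Its contract is terse: clean packages print nothing and the exit code is 0, while any mismatch prints an attribute-flag line and flips the exit code. A small reader for that output (package list abridged):

    import subprocess

    res = subprocess.run(["rpm", "-V", "nftables", "NetworkManager"],
                         capture_output=True, text=True)
    for line in res.stdout.splitlines():
        print("verify mismatch:", line)   # e.g. "S.5....T.  c /etc/..."
    if res.returncode == 0:
        print("all packages verified clean")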
Nov 25 10:10:15 compute-0 sudo[55674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxdbjrzohljzwxsvkdywlhanhrshhxni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065414.692098-136-60998793977987/AnsiballZ_file.py'
Nov 25 10:10:15 compute-0 sudo[55674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:15 compute-0 python3.9[55676]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 10:10:15 compute-0 sudo[55674]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:16 compute-0 python3.9[55826]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:10:16 compute-0 sudo[55978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfqflknbxbamgzhvqzthwyxdjpyuonvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065416.51541-152-215574861074729/AnsiballZ_dnf.py'
Nov 25 10:10:16 compute-0 sudo[55978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:17 compute-0 python3.9[55980]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:10:19 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:10:19 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:10:19 compute-0 systemd[1]: Reloading.
Nov 25 10:10:19 compute-0 systemd-rc-local-generator[56018]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:10:19 compute-0 systemd-sysv-generator[56022]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:10:19 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:10:19 compute-0 sudo[55978]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:20 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:10:20 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:10:20 compute-0 systemd[1]: run-rbd17bdb25f1f453cbfc2662a57183ffa.service: Deactivated successfully.
Nov 25 10:10:20 compute-0 sudo[56297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsrsnhccenlucvmoejzvqqewlcjmpftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065420.0241098-160-266327999277630/AnsiballZ_systemd.py'
Nov 25 10:10:20 compute-0 sudo[56297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:20 compute-0 python3.9[56299]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:10:20 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 25 10:10:20 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 25 10:10:20 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 25 10:10:20 compute-0 systemd[1]: Stopping Network Manager...
Nov 25 10:10:20 compute-0 NetworkManager[7199]: <info>  [1764065420.7108] caught SIGTERM, shutting down normally.
Nov 25 10:10:20 compute-0 NetworkManager[7199]: <info>  [1764065420.7120] dhcp4 (eth0): canceled DHCP transaction
Nov 25 10:10:20 compute-0 NetworkManager[7199]: <info>  [1764065420.7121] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:10:20 compute-0 NetworkManager[7199]: <info>  [1764065420.7121] dhcp4 (eth0): state changed no lease
Nov 25 10:10:20 compute-0 NetworkManager[7199]: <info>  [1764065420.7122] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 10:10:20 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 10:10:20 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 10:10:20 compute-0 NetworkManager[7199]: <info>  [1764065420.7594] exiting (success)
Nov 25 10:10:20 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 25 10:10:20 compute-0 systemd[1]: Stopped Network Manager.
Nov 25 10:10:20 compute-0 systemd[1]: NetworkManager.service: Consumed 15.282s CPU time, 4.1M memory peak, read 0B from disk, written 29.0K to disk.
Nov 25 10:10:20 compute-0 systemd[1]: Starting Network Manager...
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.8232] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e06b8e8c-0c4c-4141-b318-1ef0fbbec151)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.8233] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.8281] manager[0x55b39ace0090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 25 10:10:20 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 10:10:20 compute-0 systemd[1]: Started Hostname Service.
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9459] hostname: hostname: using hostnamed
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9460] hostname: static hostname changed from (none) to "compute-0"
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9472] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9481] manager[0x55b39ace0090]: rfkill: Wi-Fi hardware radio set enabled
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9482] manager[0x55b39ace0090]: rfkill: WWAN hardware radio set enabled
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9523] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9540] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9542] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9543] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9544] manager: Networking is enabled by state file
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9552] settings: Loaded settings plugin: keyfile (internal)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9558] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9612] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9631] dhcp: init: Using DHCP client 'internal'
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9635] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9646] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9658] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9671] device (lo): Activation: starting connection 'lo' (14c424d9-56c8-4f39-a02e-7c90be18328a)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9683] device (eth0): carrier: link connected
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9690] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9700] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9701] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9716] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9730] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9740] device (eth1): carrier: link connected
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9747] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9760] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (c1249576-eed4-542c-bfdf-2a49ef515b96) (indicated)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9761] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9772] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9788] device (eth1): Activation: starting connection 'ci-private-network' (c1249576-eed4-542c-bfdf-2a49ef515b96)
Nov 25 10:10:20 compute-0 systemd[1]: Started Network Manager.
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9799] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9825] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9834] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9841] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9848] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9854] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9863] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9873] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9881] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9900] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9910] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9941] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 25 10:10:20 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9959] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9970] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9973] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9983] device (lo): Activation: successful, device activated.
Nov 25 10:10:20 compute-0 NetworkManager[56317]: <info>  [1764065420.9992] dhcp4 (eth0): state changed new lease, address=38.102.83.147
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0003] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0216] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0228] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0230] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0234] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0239] device (eth1): Activation: successful, device activated.
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0251] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0254] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0262] manager: NetworkManager state is now CONNECTED_SITE
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0267] device (eth0): Activation: successful, device activated.
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0276] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 25 10:10:21 compute-0 sudo[56297]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:21 compute-0 NetworkManager[56317]: <info>  [1764065421.0596] manager: startup complete
Nov 25 10:10:21 compute-0 systemd[1]: Finished Network Manager Wait Online.
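The burst above is the systemd task restarting NetworkManager: the old daemon (pid 7199) cancels its DHCP transaction and exits, the new one (pid 56317) re-assumes lo/eth0/eth1, and Wait Online only finishes once startup completes. Scripted by hand, the same restart-and-block step could look like this (the 30-second timeout is an assumption, not from the log):

    import subprocess

    subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)
    # nm-online -s waits for NetworkManager startup (not full connectivity);
    # the exit code distinguishes "startup complete" from a timeout.
    ok = subprocess.run(["nm-online", "-s", "-q", "--timeout", "30"]).returncode == 0
    print("NetworkManager startup complete" if ok else "timed out waiting")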
Nov 25 10:10:21 compute-0 sudo[56523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jltdmsafdvnujfswbffvwfqshktmofkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065421.231413-168-252557111471440/AnsiballZ_dnf.py'
Nov 25 10:10:21 compute-0 sudo[56523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:21 compute-0 python3.9[56525]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:10:31 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 10:10:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:10:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:10:32 compute-0 systemd[1]: Reloading.
Nov 25 10:10:32 compute-0 systemd-rc-local-generator[56577]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:10:32 compute-0 systemd-sysv-generator[56580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:10:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:10:35 compute-0 sudo[56523]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:36 compute-0 sudo[56980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdyoyardtusetcirkbpqtpvdzedxebeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065436.3151011-180-36927831573924/AnsiballZ_stat.py'
Nov 25 10:10:36 compute-0 sudo[56980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:36 compute-0 python3.9[56982]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:10:36 compute-0 sudo[56980]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:37 compute-0 sudo[57132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cakzvuesmotbompykgeovzxaoywtduhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065437.1129937-189-147228847058043/AnsiballZ_ini_file.py'
Nov 25 10:10:37 compute-0 sudo[57132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:37 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:10:37 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:10:37 compute-0 systemd[1]: run-r9de719337dff4ab49496f4d7a4e8f517.service: Deactivated successfully.
Nov 25 10:10:37 compute-0 python3.9[57134]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:37 compute-0 sudo[57132]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:38 compute-0 sudo[57287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euyjfykuejhjczfbtgcoryjvkfwxalmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065438.0925522-199-92555313037793/AnsiballZ_ini_file.py'
Nov 25 10:10:38 compute-0 sudo[57287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:38 compute-0 python3.9[57289]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:38 compute-0 sudo[57287]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:39 compute-0 sudo[57439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeeevhsbsmmxkcyxvstoktqwgjtnjsqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065438.9000058-199-154570398937734/AnsiballZ_ini_file.py'
Nov 25 10:10:39 compute-0 sudo[57439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:39 compute-0 python3.9[57441]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:39 compute-0 sudo[57439]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:40 compute-0 sudo[57591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuhjwbfsjcfukqfnbwpbwenmmatgfqgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065439.68004-214-20439449814794/AnsiballZ_ini_file.py'
Nov 25 10:10:40 compute-0 sudo[57591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:40 compute-0 python3.9[57593]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:40 compute-0 sudo[57591]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:40 compute-0 sudo[57743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyqbwvzrnpupbwvvziqcudgvtqluqcen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065440.469496-214-145144249790693/AnsiballZ_ini_file.py'
Nov 25 10:10:40 compute-0 sudo[57743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:40 compute-0 python3.9[57745]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:41 compute-0 sudo[57743]: pam_unix(sudo:session): session closed for user root
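The ini_file tasks above reshape /etc/NetworkManager/NetworkManager.conf and the cloud-init drop-in: no-auto-default=* is enforced in [main], while any dns= and rc-manager= overrides are removed. A minimal configparser sketch of the same edits on the main file (unlike ini_file it drops comments and makes no backup, so it is illustrative only):

    import configparser

    path = "/etc/NetworkManager/NetworkManager.conf"
    cfg = configparser.ConfigParser(interpolation=None)
    cfg.read(path)
    if not cfg.has_section("main"):
        cfg.add_section("main")
    cfg.set("main", "no-auto-default", "*")       # state=present
    for opt in ("dns", "rc-manager"):
        cfg.remove_option("main", opt)            # state=absent; no-op if missing
    with open(path, "w") as fh:
        cfg.write(fh)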
Nov 25 10:10:41 compute-0 sudo[57895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loyecnqltdsljgebnhhlqssngtsoiboe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065441.221284-229-133831951166677/AnsiballZ_stat.py'
Nov 25 10:10:41 compute-0 sudo[57895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:41 compute-0 python3.9[57897]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:10:41 compute-0 sudo[57895]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:42 compute-0 sudo[58018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azygtpncsecwvviuhiovbgzlvprgyouq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065441.221284-229-133831951166677/AnsiballZ_copy.py'
Nov 25 10:10:42 compute-0 sudo[58018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:42 compute-0 python3.9[58020]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065441.221284-229-133831951166677/.source _original_basename=.er_n4bat follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:42 compute-0 sudo[58018]: pam_unix(sudo:session): session closed for user root
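The stat/copy pair above is Ansible's change detection for /etc/dhcp/dhclient-enter-hooks: the file is rewritten only when the sha1 differs, and the write goes through a temporary file so the hook is never left half-written. The same idea in a few lines (copy_if_changed is a hypothetical helper name):

    import hashlib, os, shutil, tempfile

    def copy_if_changed(src: str, dest: str, mode: int = 0o755) -> bool:
        def sha1(path: str) -> str:
            h = hashlib.sha1()
            with open(path, "rb") as fh:
                for chunk in iter(lambda: fh.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()
        if os.path.exists(dest) and sha1(dest) == sha1(src):
            return False                  # checksums match: report "no change"
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        os.close(fd)
        shutil.copyfile(src, tmp)
        os.chmod(tmp, mode)               # mode=0755, as in the task above
        os.replace(tmp, dest)             # atomic rename, never a partial file
        return True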
Nov 25 10:10:43 compute-0 sudo[58170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ongqfevhcurfrzisiduxgqlhgmlpznbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065442.6961129-244-30519873755561/AnsiballZ_file.py'
Nov 25 10:10:43 compute-0 sudo[58170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:43 compute-0 python3.9[58172]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:43 compute-0 sudo[58170]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:43 compute-0 sudo[58322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgqbjtivtjpnfjvagvskgkxnyczzhkal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065443.4153776-252-65565341209550/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 25 10:10:43 compute-0 sudo[58322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:44 compute-0 python3.9[58324]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 25 10:10:44 compute-0 sudo[58322]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:44 compute-0 sudo[58474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxqffehxzqgfgyfxreeqdbbdbsggiywi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065444.293314-261-246800686722001/AnsiballZ_file.py'
Nov 25 10:10:44 compute-0 sudo[58474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:44 compute-0 python3.9[58476]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:44 compute-0 sudo[58474]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:45 compute-0 sudo[58626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpbmfxjwtgxsfrvzgffohauvturuqeew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065445.0677075-271-13504644738795/AnsiballZ_stat.py'
Nov 25 10:10:45 compute-0 sudo[58626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:45 compute-0 sudo[58626]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:45 compute-0 sudo[58749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bihvzcdveqkrflonqbzlwzckvckgzvxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065445.0677075-271-13504644738795/AnsiballZ_copy.py'
Nov 25 10:10:45 compute-0 sudo[58749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:46 compute-0 sudo[58749]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:46 compute-0 sudo[58901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qonvrzrjfandilbafixoqhlzssvtvzpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065446.2583945-286-243956594108857/AnsiballZ_slurp.py'
Nov 25 10:10:46 compute-0 sudo[58901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:46 compute-0 python3.9[58903]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 25 10:10:46 compute-0 sudo[58901]: pam_unix(sudo:session): session closed for user root
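ansible.builtin.slurp does not stream files; it base64-encodes the whole content and returns it in the module result, leaving decoding to the controller. Both halves in miniature, for the path in the task above:

    import base64

    # Remote side: what slurp puts into its result payload.
    with open("/etc/os-net-config/config.yaml", "rb") as fh:
        encoded = base64.b64encode(fh.read())

    # Controller side: recover the YAML text from the payload.
    config_text = base64.b64decode(encoded).decode("utf-8")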
Nov 25 10:10:47 compute-0 sudo[59076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpwswamqklkjhidyuaouxvkvpyyzxvtf ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065447.1372128-295-60212716055594/async_wrapper.py j186139106406 300 /home/zuul/.ansible/tmp/ansible-tmp-1764065447.1372128-295-60212716055594/AnsiballZ_edpm_os_net_config.py _'
Nov 25 10:10:47 compute-0 sudo[59076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:47 compute-0 ansible-async_wrapper.py[59078]: Invoked with j186139106406 300 /home/zuul/.ansible/tmp/ansible-tmp-1764065447.1372128-295-60212716055594/AnsiballZ_edpm_os_net_config.py _
Nov 25 10:10:47 compute-0 ansible-async_wrapper.py[59081]: Starting module and watcher
Nov 25 10:10:47 compute-0 ansible-async_wrapper.py[59081]: Start watching 59082 (300)
Nov 25 10:10:47 compute-0 ansible-async_wrapper.py[59082]: Start module (59082)
Nov 25 10:10:47 compute-0 ansible-async_wrapper.py[59078]: Return async_wrapper task started.
Nov 25 10:10:47 compute-0 sudo[59076]: pam_unix(sudo:session): session closed for user root
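The async_wrapper lines above are Ansible's fire-and-forget machinery: the module is started detached, a watcher is told to wait 300 seconds (the second argument in the log), and the sudo session returns immediately. The supervision pattern, compressed (the module filename is shortened from the log's full temp path):

    import subprocess

    proc = subprocess.Popen(
        ["/usr/bin/python3.9", "AnsiballZ_edpm_os_net_config.py"],
        start_new_session=True)           # detach so the caller can return
    try:
        rc = proc.wait(timeout=300)       # the watcher's 300-second budget
    except subprocess.TimeoutExpired:
        proc.kill()                       # deadline hit: reap the module
        rc = None
    print("module exit:", rc)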
Nov 25 10:10:48 compute-0 python3.9[59083]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 25 10:10:48 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 25 10:10:48 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 25 10:10:48 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 25 10:10:48 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 25 10:10:48 compute-0 kernel: cfg80211: failed to load regulatory.db
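The edpm_os_net_config invocation above runs with debug and detailed_exit_codes enabled (the cfg80211/regulatory.db lines are an unrelated side effect of wireless modules loading and are harmless on this VM). Driving the underlying tool directly would look roughly like this; the exit-code meanings follow os-net-config's --detailed-exit-codes convention, where 2 conventionally signals "changes were applied" rather than an error:

    import subprocess

    res = subprocess.run(["os-net-config", "-c", "/etc/os-net-config/config.yaml",
                          "--detailed-exit-codes", "--debug", "--cleanup"])
    if res.returncode == 0:
        print("network already matched the config")
    elif res.returncode == 2:
        print("network configuration updated")
    else:
        raise SystemExit(f"os-net-config failed: rc={res.returncode}")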
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.8941] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.8956] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59084 uid=0 result="success"
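The two audit lines above are the safety net for everything that follows: a NetworkManager checkpoint is created before any connection is touched, and its rollback timeout is extended while work proceeds, so a crash mid-change would make NetworkManager restore the pre-checkpoint state on expiry. The same call made by hand over D-Bus (the 60-second timeout is an assumption; signature "aouu" is device list, rollback timeout, flags, with an empty device list meaning all devices):

    import subprocess

    subprocess.run(["busctl", "call", "org.freedesktop.NetworkManager",
                    "/org/freedesktop/NetworkManager",
                    "org.freedesktop.NetworkManager",
                    "CheckpointCreate", "aouu", "0", "60", "0"], check=True)
    # Destroying the checkpoint commits the changes; letting it expire rolls back.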
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9562] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9564] audit: op="connection-add" uuid="fdb67cfe-9aa5-4b5d-8dbf-3a9deb0cb413" name="br-ex-br" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9585] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9587] audit: op="connection-add" uuid="b9fb75ba-bc3f-4869-bc99-933b567d9023" name="br-ex-port" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9601] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9603] audit: op="connection-add" uuid="752ca92f-d61f-44ef-a6e0-85a746e95b52" name="eth1-port" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9617] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9619] audit: op="connection-add" uuid="627a74d3-d4d7-4686-9b93-8625f1d20f31" name="vlan20-port" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9634] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9636] audit: op="connection-add" uuid="80f6b4f8-749c-4273-8db4-ba0196935f7a" name="vlan21-port" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9650] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9651] audit: op="connection-add" uuid="fe1c2ca6-a1da-4aab-a214-de2952e052e2" name="vlan22-port" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9674] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9707] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9709] audit: op="connection-add" uuid="59ae4838-e69a-4303-8cb3-3c2b4c858db9" name="br-ex-if" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9831] audit: op="connection-update" uuid="c1249576-eed4-542c-bfdf-2a49ef515b96" name="ci-private-network" args="connection.slave-type,connection.port-type,connection.controller,connection.master,connection.timestamp,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.dns,ipv4.routing-rules,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.dns,ipv6.routing-rules,ovs-external-ids.data,ovs-interface.type" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9859] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9861] audit: op="connection-add" uuid="0ce2cf04-8feb-4158-aa95-39ff2f58d65a" name="vlan20-if" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9888] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9890] audit: op="connection-add" uuid="6ae49df5-7252-4fa4-b750-dbeb56dd9e9c" name="vlan21-if" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9917] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9919] audit: op="connection-add" uuid="19a389be-3b78-437a-9ffc-471b37f0d441" name="vlan22-if" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9934] audit: op="connection-delete" uuid="d4f00ec1-b080-3d7a-ab6b-d6cd50aae30b" name="Wired connection 1" pid=59084 uid=0 result="success"
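The connection-add batch above builds NetworkManager's three-level OVS model: one ovs-bridge connection (br-ex-br), an ovs-port per attachment (br-ex-port, eth1-port, vlan20/21/22-port), and an ovs-interface per port that actually carries configuration (br-ex-if, vlan20-if, ...). Reconstructed with nmcli for the bridge's own triple (connection names match the log; the addressing settings here are illustrative, not taken from it):

    import subprocess

    def add(*args: str) -> None:
        subprocess.run(["nmcli", "connection", "add", *args], check=True)

    add("type", "ovs-bridge", "conn.interface", "br-ex", "con-name", "br-ex-br")
    add("type", "ovs-port", "conn.interface", "br-ex",
        "controller", "br-ex-br", "con-name", "br-ex-port")
    add("type", "ovs-interface", "conn.interface", "br-ex",
        "controller", "br-ex-port", "con-name", "br-ex-if",
        "ipv4.method", "disabled", "ipv6.method", "disabled")
    # eth1 and the vlan ports hang off the same bridge as further
    # ovs-port/ovs-interface pairs, which is what the remaining adds do.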
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9953] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9968] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9974] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (fdb67cfe-9aa5-4b5d-8dbf-3a9deb0cb413)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9975] audit: op="connection-activate" uuid="fdb67cfe-9aa5-4b5d-8dbf-3a9deb0cb413" name="br-ex-br" pid=59084 uid=0 result="success"
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9978] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9988] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9995] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (b9fb75ba-bc3f-4869-bc99-933b567d9023)
Nov 25 10:10:49 compute-0 NetworkManager[56317]: <info>  [1764065449.9998] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0007] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0014] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (752ca92f-d61f-44ef-a6e0-85a746e95b52)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0017] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0027] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0033] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (627a74d3-d4d7-4686-9b93-8625f1d20f31)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0036] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0046] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0053] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (80f6b4f8-749c-4273-8db4-ba0196935f7a)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0055] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0063] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0067] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (fe1c2ca6-a1da-4aab-a214-de2952e052e2)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0068] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0070] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0072] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0078] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0082] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0086] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (59ae4838-e69a-4303-8cb3-3c2b4c858db9)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0086] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0089] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0090] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0091] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0092] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0103] device (eth1): disconnecting for new activation request.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0103] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0106] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0107] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0108] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0110] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0115] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0120] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (0ce2cf04-8feb-4158-aa95-39ff2f58d65a)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0121] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0124] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0126] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0128] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0130] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0136] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0141] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (6ae49df5-7252-4fa4-b750-dbeb56dd9e9c)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0142] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0145] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0147] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0148] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0151] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0157] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0162] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (19a389be-3b78-437a-9ffc-471b37f0d441)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0162] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0166] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0168] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0169] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0171] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0190] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=59084 uid=0 result="success"
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0192] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0196] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0200] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0209] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0213] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0219] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0223] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0225] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0235] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0241] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0246] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0247] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0253] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0258] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0262] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0264] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 kernel: Timeout policy base is empty
Nov 25 10:10:50 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0270] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 systemd-udevd[59087]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0276] dhcp4 (eth0): canceled DHCP transaction
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0276] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0276] dhcp4 (eth0): state changed no lease
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0277] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0289] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0293] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59084 uid=0 result="fail" reason="Device is not activated"
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0298] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0338] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0347] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0358] device (eth1): disconnecting for new activation request.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0359] audit: op="connection-activate" uuid="c1249576-eed4-542c-bfdf-2a49ef515b96" name="ci-private-network" pid=59084 uid=0 result="success"
Nov 25 10:10:50 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0393] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0627] device (eth1): Activation: starting connection 'ci-private-network' (c1249576-eed4-542c-bfdf-2a49ef515b96)
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0651] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0657] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0666] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59084 uid=0 result="success"
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0666] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0668] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0669] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0671] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0672] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0674] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0677] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0683] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0687] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0691] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0697] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0701] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0704] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 kernel: br-ex: entered promiscuous mode
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0708] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0711] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0714] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0718] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0722] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0725] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0732] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0734] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0794] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 kernel: vlan22: entered promiscuous mode
Nov 25 10:10:50 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 25 10:10:50 compute-0 systemd-udevd[59088]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0807] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0813] device (eth1): Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0855] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0866] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 kernel: vlan21: entered promiscuous mode
Nov 25 10:10:50 compute-0 systemd-udevd[59089]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0902] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0903] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0908] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 kernel: vlan20: entered promiscuous mode
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0973] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.0990] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1005] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1006] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1012] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1022] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1034] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1048] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1056] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1061] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1068] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1074] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1081] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1084] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1090] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 25 10:10:50 compute-0 NetworkManager[56317]: <info>  [1764065450.1417] dhcp4 (eth0): state changed new lease, address=38.102.83.147
Nov 25 10:10:50 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.2561] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59084 uid=0 result="success"
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.4736] checkpoint[0x55b39acb5950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.4742] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59084 uid=0 result="success"
Nov 25 10:10:51 compute-0 sudo[59418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ratghqxvnwhvqdtdojsheqrdmvkwkijx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065451.1562603-295-220471741792584/AnsiballZ_async_status.py'
Nov 25 10:10:51 compute-0 sudo[59418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.7681] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59084 uid=0 result="success"
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.7697] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59084 uid=0 result="success"
Nov 25 10:10:51 compute-0 python3.9[59420]: ansible-ansible.legacy.async_status Invoked with jid=j186139106406.59078 mode=status _async_dir=/root/.ansible_async
Nov 25 10:10:51 compute-0 sudo[59418]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.9513] audit: op="networking-control" arg="global-dns-configuration" pid=59084 uid=0 result="success"
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.9543] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.9582] audit: op="networking-control" arg="global-dns-configuration" pid=59084 uid=0 result="success"
Nov 25 10:10:51 compute-0 NetworkManager[56317]: <info>  [1764065451.9609] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59084 uid=0 result="success"
Nov 25 10:10:52 compute-0 NetworkManager[56317]: <info>  [1764065452.1109] checkpoint[0x55b39acb5a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 25 10:10:52 compute-0 NetworkManager[56317]: <info>  [1764065452.1115] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59084 uid=0 result="success"
Nov 25 10:10:52 compute-0 ansible-async_wrapper.py[59082]: Module complete (59082)
Nov 25 10:10:52 compute-0 ansible-async_wrapper.py[59081]: Done in kid B.
Nov 25 10:10:55 compute-0 sudo[59522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptbtyjfzbbktkgyptgvcxunyxljdccit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065451.1562603-295-220471741792584/AnsiballZ_async_status.py'
Nov 25 10:10:55 compute-0 sudo[59522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:55 compute-0 python3.9[59525]: ansible-ansible.legacy.async_status Invoked with jid=j186139106406.59078 mode=status _async_dir=/root/.ansible_async
Nov 25 10:10:55 compute-0 sudo[59522]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:55 compute-0 sudo[59622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnurxflyijvfwywgncvyeyavtpawecuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065451.1562603-295-220471741792584/AnsiballZ_async_status.py'
Nov 25 10:10:55 compute-0 sudo[59622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:55 compute-0 python3.9[59624]: ansible-ansible.legacy.async_status Invoked with jid=j186139106406.59078 mode=cleanup _async_dir=/root/.ansible_async
Nov 25 10:10:55 compute-0 sudo[59622]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:56 compute-0 sudo[59774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odlxuequmjbepqudaxqngzugxvmpzcry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065456.0494308-322-211762223932389/AnsiballZ_stat.py'
Nov 25 10:10:56 compute-0 sudo[59774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:56 compute-0 python3.9[59776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:10:56 compute-0 sudo[59774]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:56 compute-0 sudo[59897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdjeaqnyzumdszpmcvtukgsdkpaaiaug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065456.0494308-322-211762223932389/AnsiballZ_copy.py'
Nov 25 10:10:56 compute-0 sudo[59897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:57 compute-0 python3.9[59899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065456.0494308-322-211762223932389/.source.returncode _original_basename=.sno5hi62 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:57 compute-0 sudo[59897]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:57 compute-0 sudo[60049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxevtchmzrijtilrnpvtcbhwndkmtshg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065457.335663-338-205944748470626/AnsiballZ_stat.py'
Nov 25 10:10:57 compute-0 sudo[60049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:57 compute-0 python3.9[60051]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:10:57 compute-0 sudo[60049]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:58 compute-0 sudo[60172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwutmvjupplwjowduyhqwaryzyucnly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065457.335663-338-205944748470626/AnsiballZ_copy.py'
Nov 25 10:10:58 compute-0 sudo[60172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:58 compute-0 python3.9[60174]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065457.335663-338-205944748470626/.source.cfg _original_basename=.54c7z46_ follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:10:58 compute-0 sudo[60172]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:59 compute-0 sudo[60325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkjuaenzyryynuqyhnacfxzedkikteiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065458.6576574-353-76547878654870/AnsiballZ_systemd.py'
Nov 25 10:10:59 compute-0 sudo[60325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:10:59 compute-0 python3.9[60327]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:10:59 compute-0 systemd[1]: Reloading Network Manager...
Nov 25 10:10:59 compute-0 NetworkManager[56317]: <info>  [1764065459.4016] audit: op="reload" arg="0" pid=60331 uid=0 result="success"
Nov 25 10:10:59 compute-0 NetworkManager[56317]: <info>  [1764065459.4032] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 25 10:10:59 compute-0 systemd[1]: Reloaded Network Manager.
Nov 25 10:10:59 compute-0 sudo[60325]: pam_unix(sudo:session): session closed for user root
Nov 25 10:10:59 compute-0 sshd-session[52313]: Connection closed by 192.168.122.30 port 54466
Nov 25 10:10:59 compute-0 sshd-session[52310]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:10:59 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 25 10:10:59 compute-0 systemd[1]: session-12.scope: Consumed 55.045s CPU time.
Nov 25 10:10:59 compute-0 systemd-logind[822]: Session 12 logged out. Waiting for processes to exit.
Nov 25 10:10:59 compute-0 systemd-logind[822]: Removed session 12.
Nov 25 10:11:05 compute-0 sshd-session[60362]: Accepted publickey for zuul from 192.168.122.30 port 42506 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:11:05 compute-0 systemd-logind[822]: New session 13 of user zuul.
Nov 25 10:11:05 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 25 10:11:05 compute-0 sshd-session[60362]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:11:07 compute-0 python3.9[60515]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:11:08 compute-0 python3.9[60669]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:11:09 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 25 10:11:09 compute-0 python3.9[60859]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:11:10 compute-0 sshd-session[60365]: Connection closed by 192.168.122.30 port 42506
Nov 25 10:11:10 compute-0 sshd-session[60362]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:11:10 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 25 10:11:10 compute-0 systemd[1]: session-13.scope: Consumed 2.799s CPU time.
Nov 25 10:11:10 compute-0 systemd-logind[822]: Session 13 logged out. Waiting for processes to exit.
Nov 25 10:11:10 compute-0 systemd-logind[822]: Removed session 13.
Nov 25 10:11:12 compute-0 sshd-session[60886]: Connection closed by authenticating user root 171.244.51.45 port 33134 [preauth]
Nov 25 10:11:16 compute-0 sshd-session[60890]: Accepted publickey for zuul from 192.168.122.30 port 54052 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:11:16 compute-0 systemd-logind[822]: New session 14 of user zuul.
Nov 25 10:11:16 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 25 10:11:16 compute-0 sshd-session[60890]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:11:17 compute-0 python3.9[61043]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:11:18 compute-0 python3.9[61198]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:11:19 compute-0 sudo[61352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhtjenoojxbsfnfhcvgziavwiicpsgiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065478.9137619-40-187199546751265/AnsiballZ_setup.py'
Nov 25 10:11:19 compute-0 sudo[61352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:19 compute-0 python3.9[61354]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:11:19 compute-0 sudo[61352]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:20 compute-0 sudo[61436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyipscggphkxdzzqkopydwmqmiawwpdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065478.9137619-40-187199546751265/AnsiballZ_dnf.py'
Nov 25 10:11:20 compute-0 sudo[61436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:20 compute-0 python3.9[61438]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:11:21 compute-0 sudo[61436]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:22 compute-0 sudo[61590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swzfinjeubbzzmamdkvqcsmhfywoddpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065481.9680614-52-77378852301715/AnsiballZ_setup.py'
Nov 25 10:11:22 compute-0 sudo[61590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:22 compute-0 python3.9[61592]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:11:22 compute-0 sudo[61590]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:23 compute-0 sudo[61781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ietpymtwmpznvvvkwqombjakhdjiytfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065483.2662997-63-71769713188333/AnsiballZ_file.py'
Nov 25 10:11:23 compute-0 sudo[61781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:23 compute-0 python3.9[61783]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:11:23 compute-0 sudo[61781]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:24 compute-0 sudo[61933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijqzmkjwmkbwwqpbxxygwonusquqhsax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065484.1633432-71-49517107618066/AnsiballZ_command.py'
Nov 25 10:11:24 compute-0 sudo[61933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:24 compute-0 python3.9[61935]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:11:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:11:24 compute-0 sudo[61933]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:25 compute-0 sudo[62096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgqbqzkatkrowljdlhxcefcpkgsspish ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065485.2006319-79-158571120760333/AnsiballZ_stat.py'
Nov 25 10:11:25 compute-0 sudo[62096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:25 compute-0 python3.9[62098]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:11:25 compute-0 sudo[62096]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:26 compute-0 sudo[62174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iozwjulzqfwimxorzgkuablyztadjaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065485.2006319-79-158571120760333/AnsiballZ_file.py'
Nov 25 10:11:26 compute-0 sudo[62174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:26 compute-0 python3.9[62176]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:11:26 compute-0 sudo[62174]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:26 compute-0 sudo[62326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oueuntcacqqagbkojamcfarfoyyljfot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065486.6263354-91-59581206365169/AnsiballZ_stat.py'
Nov 25 10:11:26 compute-0 sudo[62326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:27 compute-0 python3.9[62328]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:11:27 compute-0 sudo[62326]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:27 compute-0 sudo[62404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eafoyfslczpedduwajzijwgqceiacjic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065486.6263354-91-59581206365169/AnsiballZ_file.py'
Nov 25 10:11:27 compute-0 sudo[62404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:27 compute-0 python3.9[62406]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:11:27 compute-0 sudo[62404]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:28 compute-0 sudo[62556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnuquryrfmciilafespqvclhfrdckjdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065487.9769328-104-63612174866861/AnsiballZ_ini_file.py'
Nov 25 10:11:28 compute-0 sudo[62556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:28 compute-0 python3.9[62558]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:11:28 compute-0 sudo[62556]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:29 compute-0 sudo[62708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvkyubbxjpbkorguialmdyconztdxvxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065488.8856428-104-217455739914150/AnsiballZ_ini_file.py'
Nov 25 10:11:29 compute-0 sudo[62708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:29 compute-0 python3.9[62710]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:11:29 compute-0 sudo[62708]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:29 compute-0 sudo[62860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylgjydpcqjvufvfhmdgiiimjjvawjntx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065489.5677445-104-172393239318820/AnsiballZ_ini_file.py'
Nov 25 10:11:29 compute-0 sudo[62860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:30 compute-0 python3.9[62862]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:11:30 compute-0 sudo[62860]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:30 compute-0 sudo[63012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrrojmjltkruvjwbuliumodkyidhkfto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065490.21-104-192037611476494/AnsiballZ_ini_file.py'
Nov 25 10:11:30 compute-0 sudo[63012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:30 compute-0 python3.9[63014]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:11:30 compute-0 sudo[63012]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:31 compute-0 sudo[63164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taodypsnlqajzxqotmuhqnbtgglwmdqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065490.954391-135-25052777941333/AnsiballZ_dnf.py'
Nov 25 10:11:31 compute-0 sudo[63164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:31 compute-0 python3.9[63166]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:11:32 compute-0 sudo[63164]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:33 compute-0 sudo[63317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkmxsuyouqqisqlesruyfskipzwesufi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065493.2072165-146-262791421424724/AnsiballZ_setup.py'
Nov 25 10:11:33 compute-0 sudo[63317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:33 compute-0 python3.9[63319]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:11:33 compute-0 sudo[63317]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:34 compute-0 sudo[63471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whmxoillqisyaubndqxafeblpcaywtso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065494.0892863-154-179034723499007/AnsiballZ_stat.py'
Nov 25 10:11:34 compute-0 sudo[63471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:34 compute-0 python3.9[63473]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:11:34 compute-0 sudo[63471]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:35 compute-0 sudo[63623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnqtqpqqmlsbgslwgyxiofuoajbcvgad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065494.819977-163-218929312038331/AnsiballZ_stat.py'
Nov 25 10:11:35 compute-0 sudo[63623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:35 compute-0 python3.9[63625]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:11:35 compute-0 sudo[63623]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:35 compute-0 sudo[63775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccvbyspenltflrtsbbroydzolxizziuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065495.5454502-173-43667134137741/AnsiballZ_command.py'
Nov 25 10:11:35 compute-0 sudo[63775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:35 compute-0 python3.9[63777]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:11:36 compute-0 sudo[63775]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:36 compute-0 sudo[63928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnzxlqstedqvdryactltlgdhwpwboupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065496.2835772-183-36295718437972/AnsiballZ_service_facts.py'
Nov 25 10:11:36 compute-0 sudo[63928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:36 compute-0 python3.9[63930]: ansible-service_facts Invoked
Nov 25 10:11:37 compute-0 network[63947]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:11:37 compute-0 network[63948]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:11:37 compute-0 network[63949]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 10:11:40 compute-0 sudo[63928]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:41 compute-0 sudo[64232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvllzznjscufeiwuazgnomtkcuvoaiur ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764065500.922447-198-78843668450051/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764065500.922447-198-78843668450051/args'
Nov 25 10:11:41 compute-0 sudo[64232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:41 compute-0 sudo[64232]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:42 compute-0 sudo[64399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjvacwnuksskwbopoibnnrzlvjthfkgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065501.7429087-209-108844598677893/AnsiballZ_dnf.py'
Nov 25 10:11:42 compute-0 sudo[64399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:42 compute-0 python3.9[64401]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:11:43 compute-0 sudo[64399]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:44 compute-0 sudo[64552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtzssdsxtqqvrrrvikeyjkhhrhdkgpcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065503.99283-222-280750186087713/AnsiballZ_package_facts.py'
Nov 25 10:11:44 compute-0 sudo[64552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:44 compute-0 python3.9[64554]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 25 10:11:45 compute-0 sudo[64552]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:46 compute-0 sudo[64704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjzwhsuxuatpxtfvnzntovwtpdgwprke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065505.670979-232-215040722723046/AnsiballZ_stat.py'
Nov 25 10:11:46 compute-0 sudo[64704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:46 compute-0 python3.9[64706]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:11:46 compute-0 sudo[64704]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:46 compute-0 sudo[64829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkxgqobqzzhifonsyvjrjgshrswhzqsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065505.670979-232-215040722723046/AnsiballZ_copy.py'
Nov 25 10:11:46 compute-0 sudo[64829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:47 compute-0 python3.9[64831]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065505.670979-232-215040722723046/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:11:47 compute-0 sudo[64829]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:47 compute-0 sudo[64983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luqgmyhuukuxdaqacuwhiosuyrinhqkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065507.3704193-247-131397853798940/AnsiballZ_stat.py'
Nov 25 10:11:47 compute-0 sudo[64983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:47 compute-0 python3.9[64985]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:11:47 compute-0 sudo[64983]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:48 compute-0 sudo[65108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crepszatyvscmwodjtzuhqcaietfdvqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065507.3704193-247-131397853798940/AnsiballZ_copy.py'
Nov 25 10:11:48 compute-0 sudo[65108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:48 compute-0 python3.9[65110]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065507.3704193-247-131397853798940/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:11:48 compute-0 sudo[65108]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:49 compute-0 sudo[65262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxblnawmpavppzcaubafmqsjsjipdiav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065508.9917238-268-197101291239918/AnsiballZ_lineinfile.py'
Nov 25 10:11:49 compute-0 sudo[65262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:49 compute-0 python3.9[65264]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:11:49 compute-0 sudo[65262]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:50 compute-0 sudo[65416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrzmcberydsovlxrefafpsfjcehyauqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065510.3565822-283-50993565803561/AnsiballZ_setup.py'
Nov 25 10:11:50 compute-0 sudo[65416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:50 compute-0 python3.9[65418]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:11:51 compute-0 sudo[65416]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:51 compute-0 sudo[65500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwkkwtwosheornzuaxvuwvszvkiookeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065510.3565822-283-50993565803561/AnsiballZ_systemd.py'
Nov 25 10:11:51 compute-0 sudo[65500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:52 compute-0 python3.9[65502]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:11:52 compute-0 sudo[65500]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:53 compute-0 sudo[65654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmulxmcvppezawhtzcjubbezcxhsjbib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065512.7379496-299-54893886522643/AnsiballZ_setup.py'
Nov 25 10:11:53 compute-0 sudo[65654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:53 compute-0 python3.9[65656]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:11:53 compute-0 sudo[65654]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:53 compute-0 sudo[65738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxmnzmswdqraypbetfnaptnjetljuyaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065512.7379496-299-54893886522643/AnsiballZ_systemd.py'
Nov 25 10:11:53 compute-0 sudo[65738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:11:54 compute-0 python3.9[65740]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:11:54 compute-0 chronyd[831]: chronyd exiting
Nov 25 10:11:54 compute-0 systemd[1]: Stopping NTP client/server...
Nov 25 10:11:54 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 25 10:11:54 compute-0 systemd[1]: Stopped NTP client/server.
Nov 25 10:11:54 compute-0 systemd[1]: Starting NTP client/server...
Nov 25 10:11:54 compute-0 chronyd[65748]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 25 10:11:54 compute-0 chronyd[65748]: Frequency -28.782 +/- 0.469 ppm read from /var/lib/chrony/drift
Nov 25 10:11:54 compute-0 chronyd[65748]: Loaded seccomp filter (level 2)
Nov 25 10:11:54 compute-0 systemd[1]: Started NTP client/server.
Nov 25 10:11:54 compute-0 sudo[65738]: pam_unix(sudo:session): session closed for user root
Nov 25 10:11:54 compute-0 sshd-session[60893]: Connection closed by 192.168.122.30 port 54052
Nov 25 10:11:54 compute-0 sshd-session[60890]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:11:54 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 25 10:11:54 compute-0 systemd[1]: session-14.scope: Consumed 28.597s CPU time.
Nov 25 10:11:54 compute-0 systemd-logind[822]: Session 14 logged out. Waiting for processes to exit.
Nov 25 10:11:54 compute-0 systemd-logind[822]: Removed session 14.
Nov 25 10:12:00 compute-0 sshd-session[65774]: Accepted publickey for zuul from 192.168.122.30 port 49312 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:12:00 compute-0 systemd-logind[822]: New session 15 of user zuul.
Nov 25 10:12:00 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 25 10:12:00 compute-0 sshd-session[65774]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:12:01 compute-0 python3.9[65927]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:12:02 compute-0 sudo[66081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzdeguiimtgiblqvqhaguuwxbzbeulcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065521.6374838-33-221581017150116/AnsiballZ_file.py'
Nov 25 10:12:02 compute-0 sudo[66081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:02 compute-0 python3.9[66083]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:02 compute-0 sudo[66081]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:03 compute-0 sudo[66256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odrshzgokprtjdcsfzipizzlrdjjwawh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065522.5410717-41-229790516656347/AnsiballZ_stat.py'
Nov 25 10:12:03 compute-0 sudo[66256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:03 compute-0 python3.9[66258]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:03 compute-0 sudo[66256]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:03 compute-0 sudo[66334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yedhfexhrxuaeddilrbetyrojifxstig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065522.5410717-41-229790516656347/AnsiballZ_file.py'
Nov 25 10:12:03 compute-0 sudo[66334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:03 compute-0 python3.9[66336]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.clttxnnh recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:03 compute-0 sudo[66334]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:04 compute-0 sudo[66486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suwfnnxujktkpnnmjpsgxndgaqrxshwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065524.2823088-61-95668140584943/AnsiballZ_stat.py'
Nov 25 10:12:04 compute-0 sudo[66486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:04 compute-0 python3.9[66488]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:04 compute-0 sudo[66486]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:05 compute-0 sudo[66609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktepwjrsfwnowaygxogfmbzusaoepbss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065524.2823088-61-95668140584943/AnsiballZ_copy.py'
Nov 25 10:12:05 compute-0 sudo[66609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:05 compute-0 python3.9[66611]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065524.2823088-61-95668140584943/.source _original_basename=.2hd2x2rk follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:05 compute-0 sudo[66609]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:06 compute-0 sudo[66761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbvlinpxxqoqkgnnbiqgxkhdfeqsbwdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065525.7689667-77-24250116828790/AnsiballZ_file.py'
Nov 25 10:12:06 compute-0 sudo[66761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:06 compute-0 python3.9[66763]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:12:06 compute-0 sudo[66761]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:06 compute-0 sudo[66913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afhptvgcuwvvqeiyjfxvbdfzylxyzdqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065526.4694605-85-238360222737621/AnsiballZ_stat.py'
Nov 25 10:12:06 compute-0 sudo[66913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:07 compute-0 python3.9[66915]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:07 compute-0 sudo[66913]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:07 compute-0 sudo[67036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdrisayewzmfwdszjedixllpihvksplx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065526.4694605-85-238360222737621/AnsiballZ_copy.py'
Nov 25 10:12:07 compute-0 sudo[67036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:07 compute-0 python3.9[67038]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065526.4694605-85-238360222737621/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:12:07 compute-0 sudo[67036]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:08 compute-0 sudo[67188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orelwravpyxhfgqmxosqtfxbplydmpul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065527.8097813-85-273346768660757/AnsiballZ_stat.py'
Nov 25 10:12:08 compute-0 sudo[67188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:08 compute-0 python3.9[67190]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:08 compute-0 sudo[67188]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:08 compute-0 sudo[67311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqvokdpdtfcmtevhggjdqlvxujlcnill ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065527.8097813-85-273346768660757/AnsiballZ_copy.py'
Nov 25 10:12:08 compute-0 sudo[67311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:08 compute-0 python3.9[67313]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065527.8097813-85-273346768660757/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:12:08 compute-0 sudo[67311]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:09 compute-0 sudo[67463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgshgytvkbxarglcearxggtoakewdvtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065529.1390095-114-135484501472997/AnsiballZ_file.py'
Nov 25 10:12:09 compute-0 sudo[67463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:09 compute-0 python3.9[67465]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:09 compute-0 sudo[67463]: pam_unix(sudo:session): session closed for user root
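
mode=420 in the entry above is not a typo: an unquoted 0644 in YAML parses as an octal literal, so the module receives the integer 420, and chmod with 420 sets exactly the 0o644 bits. (This is the classic Ansible quoting gotcha; it happens to be harmless when the playbook wrote a leading zero.) A quick check plus the equivalent calls:

import os

assert 0o644 == 420  # why the journal prints the decimal form

os.makedirs("/etc/systemd/system-preset", exist_ok=True)
os.chmod("/etc/systemd/system-preset", 0o644)  # same bits as mode=420
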
Nov 25 10:12:10 compute-0 sudo[67615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vppfzlfisfdzwfpzrdpfoimfknxogtll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065529.8891778-122-68776689727817/AnsiballZ_stat.py'
Nov 25 10:12:10 compute-0 sudo[67615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:10 compute-0 python3.9[67617]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:10 compute-0 sudo[67615]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:10 compute-0 sudo[67738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbyklxkjghhaauvdvjvzwddtineluopj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065529.8891778-122-68776689727817/AnsiballZ_copy.py'
Nov 25 10:12:10 compute-0 sudo[67738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:11 compute-0 python3.9[67740]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065529.8891778-122-68776689727817/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:11 compute-0 sudo[67738]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:11 compute-0 sudo[67890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osxirbrxhytolbuzjwoacyyidssghftf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065531.3022058-137-3698477379425/AnsiballZ_stat.py'
Nov 25 10:12:11 compute-0 sudo[67890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:11 compute-0 python3.9[67892]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:11 compute-0 sudo[67890]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:12 compute-0 sudo[68013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lphwhdnbkxjhrwvinpzyaghvlwcjufme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065531.3022058-137-3698477379425/AnsiballZ_copy.py'
Nov 25 10:12:12 compute-0 sudo[68013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:12 compute-0 python3.9[68015]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065531.3022058-137-3698477379425/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:12 compute-0 sudo[68013]: pam_unix(sudo:session): session closed for user root
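
The journal records only the preset file's destination and sha1, not its text. For orientation, a systemd preset file is one enable/disable directive per line; a plausible content for 91-edpm-container-shutdown.preset, chosen here as an assumption to match the enable-and-start that follows, could be written like so:

from pathlib import Path

# Illustrative content only; the shipped file is known to the log solely by
# its checksum (b275e437...).
Path("/etc/systemd/system-preset/91-edpm-container-shutdown.preset").write_text(
    "enable edpm-container-shutdown.service\n"
)
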
Nov 25 10:12:13 compute-0 sudo[68165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rosgqpwlamfsihwyatoeznbkdkgwekvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065532.608938-152-164107121559136/AnsiballZ_systemd.py'
Nov 25 10:12:13 compute-0 sudo[68165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:13 compute-0 python3.9[68167]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:12:13 compute-0 systemd[1]: Reloading.
Nov 25 10:12:13 compute-0 systemd-rc-local-generator[68190]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:12:13 compute-0 systemd-sysv-generator[68193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:12:13 compute-0 systemd[1]: Reloading.
Nov 25 10:12:13 compute-0 systemd-rc-local-generator[68231]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:12:13 compute-0 systemd-sysv-generator[68236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:12:14 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 25 10:12:14 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 25 10:12:14 compute-0 sudo[68165]: pam_unix(sudo:session): session closed for user root
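
At the CLI level, the ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) amounts to the following; the paired "Reloading." entries in the journal likely reflect the explicit daemon reload plus a second reload around the enablement change:

import subprocess

for cmd in (
    ["systemctl", "daemon-reload"],
    ["systemctl", "enable", "edpm-container-shutdown"],
    ["systemctl", "start", "edpm-container-shutdown"],
):
    subprocess.run(cmd, check=True)
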
Nov 25 10:12:14 compute-0 sudo[68395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhezejjaninafxeokbyzzaqpcqflsjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065534.2925231-160-176895325557492/AnsiballZ_stat.py'
Nov 25 10:12:14 compute-0 sudo[68395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:14 compute-0 python3.9[68397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:14 compute-0 sudo[68395]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:15 compute-0 sudo[68518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfifracdmbkuiwtejrpgpwhkvbuzgwxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065534.2925231-160-176895325557492/AnsiballZ_copy.py'
Nov 25 10:12:15 compute-0 sudo[68518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:15 compute-0 python3.9[68520]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065534.2925231-160-176895325557492/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:15 compute-0 sudo[68518]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:15 compute-0 sudo[68670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsbmdaxqjrxtsdhrwcgunuzgvkvruwtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065535.5893657-175-108394407552414/AnsiballZ_stat.py'
Nov 25 10:12:15 compute-0 sudo[68670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:15 compute-0 python3.9[68672]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:16 compute-0 sudo[68670]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:16 compute-0 sudo[68793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtyfudsfjoolfsxszacuuedzyitvnoua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065535.5893657-175-108394407552414/AnsiballZ_copy.py'
Nov 25 10:12:16 compute-0 sudo[68793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:16 compute-0 python3.9[68795]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065535.5893657-175-108394407552414/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:16 compute-0 sudo[68793]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:17 compute-0 sudo[68945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izvcrfnytidjpntkurbmrmlqvwleycol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065536.7927747-190-122942708011794/AnsiballZ_systemd.py'
Nov 25 10:12:17 compute-0 sudo[68945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:17 compute-0 python3.9[68947]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:12:17 compute-0 systemd[1]: Reloading.
Nov 25 10:12:17 compute-0 systemd-rc-local-generator[68971]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:12:17 compute-0 systemd-sysv-generator[68975]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:12:17 compute-0 systemd[1]: Reloading.
Nov 25 10:12:17 compute-0 systemd-rc-local-generator[69006]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:12:17 compute-0 systemd-sysv-generator[69010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:12:17 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 10:12:17 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 10:12:17 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 10:12:17 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 10:12:17 compute-0 sudo[68945]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:18 compute-0 python3.9[69175]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:12:18 compute-0 network[69192]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:12:18 compute-0 network[69193]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:12:18 compute-0 network[69194]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 10:12:23 compute-0 sudo[69454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzgmhmmyxcdvolrgshqazkvtnfgqbjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065543.0673342-206-172982910193908/AnsiballZ_systemd.py'
Nov 25 10:12:23 compute-0 sudo[69454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:23 compute-0 python3.9[69456]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:12:23 compute-0 systemd[1]: Reloading.
Nov 25 10:12:23 compute-0 systemd-sysv-generator[69489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:12:23 compute-0 systemd-rc-local-generator[69485]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:12:23 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 25 10:12:24 compute-0 iptables.init[69496]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 25 10:12:24 compute-0 iptables.init[69496]: iptables: Flushing firewall rules: [  OK  ]
Nov 25 10:12:24 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 25 10:12:24 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 25 10:12:24 compute-0 sudo[69454]: pam_unix(sudo:session): session closed for user root
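
The same module then retires the legacy firewall: enabled=False plus state=stopped is systemctl disable followed by systemctl stop, and the iptables init script resets chain policies to ACCEPT and flushes all rules on the way down, as the two iptables.init lines show. Equivalent commands:

import subprocess

subprocess.run(["systemctl", "disable", "iptables.service"], check=True)
subprocess.run(["systemctl", "stop", "iptables.service"], check=True)
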
Nov 25 10:12:24 compute-0 sudo[69690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpumcacoqifebendjabfvrzrofifrfqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065544.52477-206-222779212613263/AnsiballZ_systemd.py'
Nov 25 10:12:24 compute-0 sudo[69690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:25 compute-0 python3.9[69692]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:12:25 compute-0 sudo[69690]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:25 compute-0 sudo[69844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqobpwdiqukjkdmczvoyvpbvczersded ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065545.3976293-222-45231638598085/AnsiballZ_systemd.py'
Nov 25 10:12:25 compute-0 sudo[69844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:26 compute-0 python3.9[69846]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:12:26 compute-0 systemd[1]: Reloading.
Nov 25 10:12:26 compute-0 systemd-sysv-generator[69879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:12:26 compute-0 systemd-rc-local-generator[69876]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:12:26 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 25 10:12:26 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 25 10:12:26 compute-0 sudo[69844]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:27 compute-0 sudo[70036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esxzlbvvcqyoehxwjqyfilxswlbsxaej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065546.733745-230-16868995410005/AnsiballZ_command.py'
Nov 25 10:12:27 compute-0 sudo[70036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:27 compute-0 python3.9[70038]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:12:27 compute-0 sudo[70036]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:28 compute-0 sudo[70189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pohdgfmchhwrbljzteibybbdlolwzgvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065547.8532321-244-55701160725790/AnsiballZ_stat.py'
Nov 25 10:12:28 compute-0 sudo[70189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:28 compute-0 python3.9[70191]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:28 compute-0 sudo[70189]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:28 compute-0 sudo[70314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfrhlsdtzpkijmkxtttdwtpyddxhrykk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065547.8532321-244-55701160725790/AnsiballZ_copy.py'
Nov 25 10:12:28 compute-0 sudo[70314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:29 compute-0 python3.9[70316]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065547.8532321-244-55701160725790/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:29 compute-0 sudo[70314]: pam_unix(sudo:session): session closed for user root
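
The sshd_config copy above is gated by validate=/usr/sbin/sshd -T -f %s: the new content goes to a staging file first, sshd parses it in test mode, and only on success does it replace /etc/ssh/sshd_config. A sketch of that write-validate-move pattern (not the copy module's actual code):

import os
import shutil
import subprocess
import tempfile

def install_sshd_config(new_text: str, dest: str = "/etc/ssh/sshd_config") -> None:
    with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
        tmp.write(new_text)
    # check=True aborts before the move if sshd rejects the staged config
    subprocess.run(["/usr/sbin/sshd", "-T", "-f", tmp.name], check=True)
    shutil.move(tmp.name, dest)
    os.chmod(dest, 0o600)  # mode=0600 from the task
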
Nov 25 10:12:29 compute-0 sudo[70467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgkcntptvurbcybeqzigcrpkrswvjfda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065549.2578883-259-43279669400388/AnsiballZ_systemd.py'
Nov 25 10:12:29 compute-0 sudo[70467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:29 compute-0 python3.9[70469]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:12:29 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 25 10:12:29 compute-0 sshd[1011]: Received SIGHUP; restarting.
Nov 25 10:12:29 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 25 10:12:29 compute-0 sshd[1011]: Server listening on 0.0.0.0 port 22.
Nov 25 10:12:29 compute-0 sshd[1011]: Server listening on :: port 22.
Nov 25 10:12:29 compute-0 sudo[70467]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:30 compute-0 sudo[70623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-othixwdgkzuarctozjmgfgtonovwoews ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065550.2664356-267-8401369680720/AnsiballZ_file.py'
Nov 25 10:12:30 compute-0 sudo[70623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:30 compute-0 python3.9[70625]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:30 compute-0 sudo[70623]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:31 compute-0 sudo[70775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphgxqessnxblzgudkyawiznwbpywatn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065551.0626252-275-191507752080107/AnsiballZ_stat.py'
Nov 25 10:12:31 compute-0 sudo[70775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:31 compute-0 python3.9[70777]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:31 compute-0 sudo[70775]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:32 compute-0 sudo[70898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmtzyvjftxdtuazfdycdwxmfegluzxpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065551.0626252-275-191507752080107/AnsiballZ_copy.py'
Nov 25 10:12:32 compute-0 sudo[70898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:32 compute-0 python3.9[70900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065551.0626252-275-191507752080107/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:32 compute-0 sudo[70898]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:35 compute-0 sudo[71050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqxjywqyrpxhaoiaavxwebserackdnnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065552.5371568-293-114454774879881/AnsiballZ_timezone.py'
Nov 25 10:12:35 compute-0 sudo[71050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:35 compute-0 python3.9[71052]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 25 10:12:35 compute-0 systemd[1]: Starting Time & Date Service...
Nov 25 10:12:35 compute-0 systemd[1]: Started Time & Date Service.
Nov 25 10:12:35 compute-0 sudo[71050]: pam_unix(sudo:session): session closed for user root
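
community.general.timezone delegates to timedatectl on systemd hosts, which is why systemd-timedated is started on demand here (and quietly deactivates later once idle). The one-line equivalent:

import subprocess

subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)
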
Nov 25 10:12:36 compute-0 sudo[71206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgillxzftyworiaiyvixyehhwbcmncdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065556.0739245-302-209405643968058/AnsiballZ_file.py'
Nov 25 10:12:36 compute-0 sudo[71206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:36 compute-0 python3.9[71208]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:36 compute-0 sudo[71206]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:37 compute-0 sudo[71358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuzsownsiifzdbogjnrmmxwoydeascja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065556.808083-310-66937046766077/AnsiballZ_stat.py'
Nov 25 10:12:37 compute-0 sudo[71358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:37 compute-0 python3.9[71360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:37 compute-0 sudo[71358]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:37 compute-0 sudo[71481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvqseomuyjhxdfzypiwdfrrainnewqtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065556.808083-310-66937046766077/AnsiballZ_copy.py'
Nov 25 10:12:37 compute-0 sudo[71481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:38 compute-0 python3.9[71483]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065556.808083-310-66937046766077/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:38 compute-0 sudo[71481]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:38 compute-0 sudo[71633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hobsxcmsvpqxjbvanmhogcactgmorfpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065558.226023-325-143493082828986/AnsiballZ_stat.py'
Nov 25 10:12:38 compute-0 sudo[71633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:38 compute-0 python3.9[71635]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:38 compute-0 sudo[71633]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:39 compute-0 sudo[71756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuigihimbhhgquqqmhzyzqpkyvmkulbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065558.226023-325-143493082828986/AnsiballZ_copy.py'
Nov 25 10:12:39 compute-0 sudo[71756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:39 compute-0 python3.9[71758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065558.226023-325-143493082828986/.source.yaml _original_basename=.wkk43qhq follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:39 compute-0 sudo[71756]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:40 compute-0 sudo[71908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyxgcicedwwuvuxlepgxtjmgdmcyplnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065559.6946433-340-226288838295541/AnsiballZ_stat.py'
Nov 25 10:12:40 compute-0 sudo[71908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:40 compute-0 python3.9[71910]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:40 compute-0 sudo[71908]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:40 compute-0 sudo[72031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcrsxypanrwhowqzgbcpnrtlqiwrwdre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065559.6946433-340-226288838295541/AnsiballZ_copy.py'
Nov 25 10:12:40 compute-0 sudo[72031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:40 compute-0 python3.9[72033]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065559.6946433-340-226288838295541/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:40 compute-0 sudo[72031]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:41 compute-0 sudo[72183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bugdrxnsljulxtjrsvztglaibivczxsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065561.1265342-355-167622279181802/AnsiballZ_command.py'
Nov 25 10:12:41 compute-0 sudo[72183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:41 compute-0 python3.9[72185]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:12:41 compute-0 sudo[72183]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:42 compute-0 sudo[72336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjolexrzvtmpnzlhagyvrbzmlsfvtof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065561.9173355-363-101683900328301/AnsiballZ_command.py'
Nov 25 10:12:42 compute-0 sudo[72336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:42 compute-0 python3.9[72338]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:12:42 compute-0 sudo[72336]: pam_unix(sudo:session): session closed for user root
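
nft -j list ruleset emits the ruleset as libnftables JSON, which is far easier for the calling playbook to inspect than the plain-text listing. A small consumer, assuming only the documented {"nftables": [...]} envelope:

import json
import subprocess

out = subprocess.run(
    ["nft", "-j", "list", "ruleset"],
    check=True, capture_output=True, text=True,
).stdout
doc = json.loads(out)
tables = [o["table"]["name"] for o in doc.get("nftables", []) if "table" in o]
print("tables:", tables)
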
Nov 25 10:12:43 compute-0 sudo[72489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-netikpoubufutstkkwxivwdvmbgiluho ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764065562.7491121-371-281063709533448/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 10:12:43 compute-0 sudo[72489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:43 compute-0 python3[72491]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 10:12:43 compute-0 sudo[72489]: pam_unix(sudo:session): session closed for user root
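
edpm_nftables_from_files is a collection-local module, and the journal exposes only its src argument. A plausible reading (an assumption, not the shipped code): gather every YAML rule fragment under the directory -- the sshd-networks.yaml and edpm-nftables-*.yaml files written earlier -- into one rule list for the templating steps that follow. Sketch, assuming each file holds a list of rules:

import glob

import yaml  # PyYAML; assumed available for this sketch

rules = []
for path in sorted(glob.glob("/var/lib/edpm-config/firewall/*.yaml")):
    with open(path) as fh:
        rules.extend(yaml.safe_load(fh) or [])
print(f"loaded {len(rules)} firewall rule entries")
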
Nov 25 10:12:44 compute-0 sudo[72641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbxmwobeifbbmsvrvbdrndbslikrkzvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065563.7115023-379-143462087248726/AnsiballZ_stat.py'
Nov 25 10:12:44 compute-0 sudo[72641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:44 compute-0 python3.9[72643]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:44 compute-0 sudo[72641]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:44 compute-0 sudo[72764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljwfasarditvwwonymyvakfvdascqwpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065563.7115023-379-143462087248726/AnsiballZ_copy.py'
Nov 25 10:12:44 compute-0 sudo[72764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:45 compute-0 python3.9[72766]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065563.7115023-379-143462087248726/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:45 compute-0 sudo[72764]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:45 compute-0 sudo[72916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccpaeubqkuveoxzhxgjedlqzmpykhtqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065565.6028945-394-125363786677689/AnsiballZ_stat.py'
Nov 25 10:12:45 compute-0 sudo[72916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:46 compute-0 python3.9[72918]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:46 compute-0 sudo[72916]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:46 compute-0 sudo[73039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anpxrhmtakmxtibxzhdwiqhckusaflab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065565.6028945-394-125363786677689/AnsiballZ_copy.py'
Nov 25 10:12:46 compute-0 sudo[73039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:46 compute-0 python3.9[73041]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065565.6028945-394-125363786677689/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:46 compute-0 sudo[73039]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:47 compute-0 sudo[73191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vegtomutnztzbovqunhjlejolsygxpkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065567.1250098-409-138916184362509/AnsiballZ_stat.py'
Nov 25 10:12:47 compute-0 sudo[73191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:47 compute-0 python3.9[73193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:47 compute-0 sudo[73191]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:48 compute-0 sudo[73314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgnfcihxencmruxtjsbgmxtyvodglmnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065567.1250098-409-138916184362509/AnsiballZ_copy.py'
Nov 25 10:12:48 compute-0 sudo[73314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:48 compute-0 python3.9[73316]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065567.1250098-409-138916184362509/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:48 compute-0 sudo[73314]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:48 compute-0 sudo[73466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwaywdsmdjwhsmsbtfhfxfwuazeipyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065568.6003602-424-43029882177017/AnsiballZ_stat.py'
Nov 25 10:12:48 compute-0 sudo[73466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:49 compute-0 python3.9[73468]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:49 compute-0 sudo[73466]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:49 compute-0 sudo[73589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpqfsnldyeieadlrilzfycwizatwjll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065568.6003602-424-43029882177017/AnsiballZ_copy.py'
Nov 25 10:12:49 compute-0 sudo[73589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:49 compute-0 python3.9[73591]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065568.6003602-424-43029882177017/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:49 compute-0 sudo[73589]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:50 compute-0 sudo[73741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqeodktfzlefobmmsadvmeadamzeirof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065570.059579-439-82231277560811/AnsiballZ_stat.py'
Nov 25 10:12:50 compute-0 sudo[73741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:50 compute-0 python3.9[73743]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:12:50 compute-0 sudo[73741]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:51 compute-0 sudo[73864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvmvzfgvderwjesoehivokxzvjupqpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065570.059579-439-82231277560811/AnsiballZ_copy.py'
Nov 25 10:12:51 compute-0 sudo[73864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:51 compute-0 python3.9[73866]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065570.059579-439-82231277560811/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:51 compute-0 sudo[73864]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:51 compute-0 sudo[74016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdaktbgzgbesuocsxetcamfvqwplcknm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065571.5794635-454-203856516500847/AnsiballZ_file.py'
Nov 25 10:12:51 compute-0 sudo[74016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:52 compute-0 python3.9[74018]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:52 compute-0 sudo[74016]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:52 compute-0 sudo[74168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caqqcmbbmwnapwmgunkghbfwooxzhlss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065572.335985-462-3699713194427/AnsiballZ_command.py'
Nov 25 10:12:52 compute-0 sudo[74168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:52 compute-0 python3.9[74170]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:12:53 compute-0 sudo[74168]: pam_unix(sudo:session): session closed for user root
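
The pipefail pipeline above is a dry run: the five fragments are concatenated in load order and handed to nft -c -f -, which parses everything but commits nothing, so a broken rule fails the play before any file is activated. The same check without a shell:

import subprocess

files = [
    "/etc/nftables/edpm-chains.nft",
    "/etc/nftables/edpm-flushes.nft",
    "/etc/nftables/edpm-rules.nft",
    "/etc/nftables/edpm-update-jumps.nft",
    "/etc/nftables/edpm-jumps.nft",
]
blob = "".join(open(f).read() for f in files)
subprocess.run(["nft", "-c", "-f", "-"], input=blob, text=True, check=True)
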
Nov 25 10:12:53 compute-0 sudo[74327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfmbehtttrwnefqqkcdwokgfezopqccw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065573.188148-470-57605630500922/AnsiballZ_blockinfile.py'
Nov 25 10:12:53 compute-0 sudo[74327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:53 compute-0 python3.9[74329]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:53 compute-0 sudo[74327]: pam_unix(sudo:session): session closed for user root
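
Given the block and marker arguments logged above, the managed section that blockinfile leaves in /etc/sysconfig/nftables.conf reads as follows (the write is validated with nft -c -f %s before being kept):

BLOCK = """\
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
"""
print(BLOCK)
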
Nov 25 10:12:54 compute-0 sudo[74480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfhxnynmqhbqegcckifzdfuvwdbawcbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065574.1902301-479-241053691810951/AnsiballZ_file.py'
Nov 25 10:12:54 compute-0 sudo[74480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:54 compute-0 python3.9[74482]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:54 compute-0 sudo[74480]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:55 compute-0 sudo[74632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvmxmegfwwfoxiyncdqncryjptjyyooq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065574.852895-479-176558239337914/AnsiballZ_file.py'
Nov 25 10:12:55 compute-0 sudo[74632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:55 compute-0 python3.9[74634]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:12:55 compute-0 sudo[74632]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:56 compute-0 sudo[74784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpvafbxmdlehjrujluhxmirgofdbzxjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065575.5994418-494-246394794363929/AnsiballZ_mount.py'
Nov 25 10:12:56 compute-0 sudo[74784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:56 compute-0 python3.9[74786]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 10:12:56 compute-0 sudo[74784]: pam_unix(sudo:session): session closed for user root
Nov 25 10:12:56 compute-0 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:12:56 compute-0 sudo[74938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybbrnzpujdnkynaelrxukwxcyagdtnyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065576.56473-494-175974427091858/AnsiballZ_mount.py'
Nov 25 10:12:56 compute-0 sudo[74938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:12:57 compute-0 python3.9[74940]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 25 10:12:57 compute-0 sudo[74938]: pam_unix(sudo:session): session closed for user root
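
ansible.posix.mount with state=mounted both mounts immediately and persists an fstab entry (boot=True, dump=0, passno=0), so the two hugetlbfs mounts survive a reboot; the mount points themselves were created just before with owner zuul and group hugetlbfs. The immediate effect matches:

import subprocess

# fstab lines managed by the two tasks look like:
#   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
#   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
subprocess.run(["mount", "-t", "hugetlbfs", "-o", "pagesize=1G",
                "none", "/dev/hugepages1G"], check=True)
subprocess.run(["mount", "-t", "hugetlbfs", "-o", "pagesize=2M",
                "none", "/dev/hugepages2M"], check=True)
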
Nov 25 10:12:57 compute-0 sshd-session[65777]: Connection closed by 192.168.122.30 port 49312
Nov 25 10:12:57 compute-0 sshd-session[65774]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:12:57 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 25 10:12:57 compute-0 systemd[1]: session-15.scope: Consumed 40.012s CPU time.
Nov 25 10:12:57 compute-0 systemd-logind[822]: Session 15 logged out. Waiting for processes to exit.
Nov 25 10:12:57 compute-0 systemd-logind[822]: Removed session 15.
Nov 25 10:13:02 compute-0 sshd-session[74966]: Accepted publickey for zuul from 192.168.122.30 port 38322 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:13:02 compute-0 systemd-logind[822]: New session 16 of user zuul.
Nov 25 10:13:02 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 25 10:13:02 compute-0 sshd-session[74966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:13:03 compute-0 sudo[75119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btlwywsfjxhfceptnnrtpwiquljfgoma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065582.9077375-16-187029522520362/AnsiballZ_tempfile.py'
Nov 25 10:13:03 compute-0 sudo[75119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:03 compute-0 python3.9[75121]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 25 10:13:03 compute-0 sudo[75119]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:04 compute-0 sudo[75271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cozcdovebbchjswupslrdtpqbsfdsqtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065583.8619847-28-270948887731368/AnsiballZ_stat.py'
Nov 25 10:13:04 compute-0 sudo[75271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:04 compute-0 python3.9[75273]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:13:04 compute-0 sudo[75271]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:05 compute-0 sudo[75423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crqgybwjxfqshfkloabpdoidqhrlainh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065584.9569232-38-119298120875966/AnsiballZ_setup.py'
Nov 25 10:13:05 compute-0 sudo[75423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:05 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 25 10:13:06 compute-0 python3.9[75425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:13:06 compute-0 sudo[75423]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:06 compute-0 sudo[75577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngzrmwrzjjqyqfssgcadkhoenralbkyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065586.2654681-47-127920525599318/AnsiballZ_blockinfile.py'
Nov 25 10:13:06 compute-0 sudo[75577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:06 compute-0 python3.9[75579]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMlmTojGFGJ5rcdyS8/iEkejBgeQaiCeWo0uJpepGb+i9RfEzZCPGnLERv4kt/9xN7YmGZ0ZJ5n0C1JV+wdhjFQHwlEvTvW0yQjgvADAljvKWAMb+UI82jzJSwdk1iCkpAYiWEAmeGHGbJISgklmVLTIsutdEW6cl+MWZMmk9GhdtXYWob4iHjhfEoKM4g3dEE/rhBac6Zf7f+XEWvbWil8YZCkN0bWsBPmWSi4iPH5HPfiR6idK506LtftgsTMi0yy4j6ii7hLbAq3xwKNS7JCUJxEPJjTxxdxWMNfvNW3MZOq7G5egRPYQ4Bd1menkPj4tjcUBhuW56htXjmeqzgpFpe3MXbnO/VomGx0bmrADG94dK7LZcl3BfMufWBNTly4t2YjeHmuH2PooNtWC76SLEY3dKzp4EUOshSGhMo0IVFeaU8pMRMaCWsfgVw6qb9we+zK5iAW7pu78yV/geCaLgz8Vlnc8Hg1j44OxeAlyj0lE+O6FcHyztCPhpyvWc=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHOv7T7z7z/+3Zx6NtqBERaqkSzC+UUv4tiBhX66y7s+
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK6DKw5MP75QsMBTrO7xKEVM0oHocpH1Y1sOiWb2dqS1J6AkHhB24rkF9wI96XDh06Ne9nZuewVB2n/moE2bevE=
                                             create=True mode=0644 path=/tmp/ansible.vod8kdqu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:06 compute-0 sudo[75577]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:07 compute-0 sudo[75729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jquacylyjpmcrshrmjjjybakxtpwqvoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065587.116965-55-178619290846683/AnsiballZ_command.py'
Nov 25 10:13:07 compute-0 sudo[75729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:07 compute-0 python3.9[75731]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vod8kdqu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:13:07 compute-0 sudo[75729]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:08 compute-0 sudo[75883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdukczzmtpgpgczgdgcdnbxyhsalarfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065587.9626575-63-149583331956669/AnsiballZ_file.py'
Nov 25 10:13:08 compute-0 sudo[75883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:08 compute-0 python3.9[75885]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vod8kdqu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:08 compute-0 sudo[75883]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:08 compute-0 sshd-session[74969]: Connection closed by 192.168.122.30 port 38322
Nov 25 10:13:08 compute-0 sshd-session[74966]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:13:08 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 25 10:13:08 compute-0 systemd[1]: session-16.scope: Consumed 3.596s CPU time.
Nov 25 10:13:08 compute-0 systemd-logind[822]: Session 16 logged out. Waiting for processes to exit.
Nov 25 10:13:08 compute-0 systemd-logind[822]: Removed session 16.
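Session 16 above assembles a system-wide SSH known_hosts file in three steps: blockinfile stages the gathered host keys in /tmp/ansible.vod8kdqu, a shell task installs the staged file as /etc/ssh/ssh_known_hosts, and a file task deletes the staging copy. A minimal task sketch that would produce this invocation sequence (task names and the known_hosts_entries variable are illustrative; the real key material is the host-key block logged above):

    # Sketch: stage gathered host keys, install them system-wide, clean up.
    - name: Stage gathered host keys in a temporary file
      become: true
      ansible.builtin.blockinfile:
        path: /tmp/ansible.vod8kdqu
        create: true
        mode: "0644"
        block: "{{ known_hosts_entries }}"  # illustrative; one line per key type, as logged

    - name: Install the staged file as the system known_hosts
      become: true
      ansible.builtin.shell: cat '/tmp/ansible.vod8kdqu' > /etc/ssh/ssh_known_hosts

    - name: Remove the staging file
      become: true
      ansible.builtin.file:
        path: /tmp/ansible.vod8kdqu
        state: absent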
Nov 25 10:13:15 compute-0 sshd-session[75910]: Accepted publickey for zuul from 192.168.122.30 port 46728 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:13:15 compute-0 systemd-logind[822]: New session 17 of user zuul.
Nov 25 10:13:15 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 25 10:13:15 compute-0 sshd-session[75910]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:13:16 compute-0 python3.9[76063]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:13:17 compute-0 sudo[76217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgvvuxmpzqusjehvheukckvhawzoftg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065596.4415367-32-174636640204869/AnsiballZ_systemd.py'
Nov 25 10:13:17 compute-0 sudo[76217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:17 compute-0 python3.9[76219]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 10:13:17 compute-0 sudo[76217]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:17 compute-0 sudo[76371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpnrxzqeizhatwlkxxdnqxskjlykunnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065597.6295497-40-162603655044012/AnsiballZ_systemd.py'
Nov 25 10:13:17 compute-0 sudo[76371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:18 compute-0 python3.9[76373]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:13:18 compute-0 sudo[76371]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:19 compute-0 sudo[76524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzrkubiynvclkzhorfodqcotapsinfmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065598.6469393-49-223568437689554/AnsiballZ_command.py'
Nov 25 10:13:19 compute-0 sudo[76524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:19 compute-0 python3.9[76526]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:13:19 compute-0 sudo[76524]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:20 compute-0 sudo[76677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqwofvcofkuslmrmhshucprmwssgdpyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065599.8148713-57-119669111674214/AnsiballZ_stat.py'
Nov 25 10:13:20 compute-0 sudo[76677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:20 compute-0 python3.9[76679]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:13:20 compute-0 sudo[76677]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:21 compute-0 sudo[76831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbgvafvnyzuqncawqhgupnfzwjzhemsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065600.7061064-65-154754142146403/AnsiballZ_command.py'
Nov 25 10:13:21 compute-0 sudo[76831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:21 compute-0 python3.9[76833]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:13:21 compute-0 sudo[76831]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:21 compute-0 sudo[76986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujhproyaxfmbxeodtoorfzlxigixkpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065601.3814707-73-220544234083768/AnsiballZ_file.py'
Nov 25 10:13:21 compute-0 sudo[76986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:22 compute-0 python3.9[76988]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:22 compute-0 sudo[76986]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:22 compute-0 sshd-session[75913]: Connection closed by 192.168.122.30 port 46728
Nov 25 10:13:22 compute-0 sshd-session[75910]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:13:22 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 25 10:13:22 compute-0 systemd[1]: session-17.scope: Consumed 4.772s CPU time.
Nov 25 10:13:22 compute-0 systemd-logind[822]: Session 17 logged out. Waiting for processes to exit.
Nov 25 10:13:22 compute-0 systemd-logind[822]: Removed session 17.
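Session 17 ensures sshd is enabled and running, loads the EDPM nftables chain definitions, and re-applies the flush/rule/jump files in a single nft transaction before deleting the edpm-rules.nft.changed marker. A hedged sketch of the equivalent tasks; gating the apply on the marker file is an inference from the stat/remove pair in the log:

    # Sketch: firewall refresh as seen in session 17 (marker-gated apply is inferred).
    - name: Ensure sshd is enabled and running
      become: true
      ansible.builtin.systemd:
        name: sshd
        enabled: true
        state: started

    - name: Load the EDPM chain definitions
      become: true
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Check whether the rule set changed since the last apply
      become: true
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed

    - name: Apply flushes, rules and jump updates atomically
      become: true
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists

    - name: Drop the change marker once the rules are live
      become: true
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
      when: edpm_rules_changed.stat.exists

Feeding all three files to one `nft -f -` keeps flush and reload in a single transaction, so the firewall is never left with chains flushed but rules not yet loaded.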
Nov 25 10:13:27 compute-0 sshd-session[77013]: Accepted publickey for zuul from 192.168.122.30 port 42914 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:13:27 compute-0 systemd-logind[822]: New session 18 of user zuul.
Nov 25 10:13:27 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 25 10:13:27 compute-0 sshd-session[77013]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:13:29 compute-0 python3.9[77166]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:13:30 compute-0 sudo[77320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgkayhkksecgkysacuxawdjlulrxocff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065609.67765-34-103219941305903/AnsiballZ_setup.py'
Nov 25 10:13:30 compute-0 sudo[77320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:30 compute-0 python3.9[77322]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:13:30 compute-0 sudo[77320]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:31 compute-0 sudo[77404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amwnisrynngcderggxsbiqzimllzlrop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065609.67765-34-103219941305903/AnsiballZ_dnf.py'
Nov 25 10:13:31 compute-0 sudo[77404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:31 compute-0 python3.9[77406]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 25 10:13:32 compute-0 sudo[77404]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:33 compute-0 python3.9[77557]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:13:34 compute-0 python3.9[77708]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:13:35 compute-0 python3.9[77858]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:13:36 compute-0 python3.9[78008]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:13:37 compute-0 sshd-session[77016]: Connection closed by 192.168.122.30 port 42914
Nov 25 10:13:37 compute-0 sshd-session[77013]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:13:37 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 25 10:13:37 compute-0 systemd[1]: session-18.scope: Consumed 6.650s CPU time.
Nov 25 10:13:37 compute-0 systemd-logind[822]: Session 18 logged out. Waiting for processes to exit.
Nov 25 10:13:37 compute-0 systemd-logind[822]: Removed session 18.
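Session 18 is a reboot-needed probe: it installs yum-utils for the needs-restarting utility, runs needs-restarting -r, then checks the deployment's own reboot markers and config directories. A sketch under the assumption that the play treats the tool's exit status as data rather than as a task failure (needs-restarting -r exits 1 when a reboot is required):

    # Sketch: detect whether the node needs a reboot after package updates.
    - name: Make sure needs-restarting is available
      become: true
      ansible.builtin.dnf:
        name: yum-utils

    - name: Ask whether the running kernel or core services require a reboot
      ansible.builtin.command: needs-restarting -r
      register: needs_restarting
      failed_when: needs_restarting.rc not in [0, 1]  # rc 1 == reboot required
      changed_when: false

    - name: Collect service-specific reboot markers
      ansible.builtin.find:
        paths: /var/lib/openstack/reboot_required/
      register: reboot_markers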
Nov 25 10:13:42 compute-0 sshd-session[78033]: Accepted publickey for zuul from 192.168.122.30 port 45588 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:13:42 compute-0 systemd-logind[822]: New session 19 of user zuul.
Nov 25 10:13:42 compute-0 systemd[1]: Started Session 19 of User zuul.
Nov 25 10:13:42 compute-0 sshd-session[78033]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:13:43 compute-0 python3.9[78186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:13:44 compute-0 sudo[78340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwxslwsezluemptqvpdanlctgyuirmci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065624.453315-50-187285960885278/AnsiballZ_file.py'
Nov 25 10:13:44 compute-0 sudo[78340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:45 compute-0 python3.9[78342]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:13:45 compute-0 sudo[78340]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:45 compute-0 sudo[78492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlayzlavcjqnxppmvpghjqbfghbvvopz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065625.3329499-50-58043586302952/AnsiballZ_file.py'
Nov 25 10:13:45 compute-0 sudo[78492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:45 compute-0 python3.9[78494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:13:45 compute-0 sudo[78492]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:46 compute-0 sudo[78644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdwqrurdnhncmchyecvbtqsjolmfwrhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065626.0757287-65-91505540666220/AnsiballZ_stat.py'
Nov 25 10:13:46 compute-0 sudo[78644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:46 compute-0 python3.9[78646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:13:46 compute-0 sudo[78644]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:47 compute-0 sudo[78767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tveqhsulkzrahkqyzxjpueswqdmlpkgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065626.0757287-65-91505540666220/AnsiballZ_copy.py'
Nov 25 10:13:47 compute-0 sudo[78767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:47 compute-0 python3.9[78769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065626.0757287-65-91505540666220/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ae440993f9974d577c3adde16d16061eeaa034e3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:47 compute-0 sudo[78767]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:48 compute-0 sudo[78919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylcwnefqhcgjgxkryuoswbsnmqfjmbzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065627.723508-65-177989987911161/AnsiballZ_stat.py'
Nov 25 10:13:48 compute-0 sudo[78919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:48 compute-0 python3.9[78921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:13:48 compute-0 sudo[78919]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:48 compute-0 sudo[79042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uotlzaaszmksgbktohfyddjpaunljhzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065627.723508-65-177989987911161/AnsiballZ_copy.py'
Nov 25 10:13:48 compute-0 sudo[79042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:48 compute-0 python3.9[79044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065627.723508-65-177989987911161/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f886beb4e844a1e3960ffd3b23efc9359490a0f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:48 compute-0 sudo[79042]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:49 compute-0 sudo[79194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkfokyyzochlaaiukwirxrrsaaezjfsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065629.0489447-65-126760734962803/AnsiballZ_stat.py'
Nov 25 10:13:49 compute-0 sudo[79194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:49 compute-0 python3.9[79196]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:13:49 compute-0 sudo[79194]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:50 compute-0 sudo[79317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myfwasfnvcxpwsojwgqpetuftxdbxtvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065629.0489447-65-126760734962803/AnsiballZ_copy.py'
Nov 25 10:13:50 compute-0 sudo[79317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:50 compute-0 python3.9[79319]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065629.0489447-65-126760734962803/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f6c344b5d8b2033ab03a397a9e6b66125dbd2a2c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:50 compute-0 sudo[79317]: pam_unix(sudo:session): session closed for user root
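The lines above complete one full certificate install cycle for telemetry-power-monitoring: create /var/lib/openstack/certs/<service>/default with mode 0755 and the container_file_t SELinux type, then copy tls.crt, ca.crt and tls.key into it as root-owned mode-0600 files. The same cycle repeats below for telemetry, ovn, libvirt and neutron-metadata, so a looped sketch reproduces the whole sequence (edpm_tls_services and the source-file naming are illustrative; the log shows sources named <node fqdn>-tls.crt and so on):

    # Sketch: per-service TLS material layout, looped over the services seen below.
    - name: Create the per-service cert directory (container-readable SELinux type)
      become: true
      ansible.builtin.file:
        path: "/var/lib/openstack/certs/{{ item }}/default"
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t
      loop: "{{ edpm_tls_services }}"  # illustrative variable

    - name: Install cert, CA and key with restrictive permissions
      become: true
      ansible.builtin.copy:
        src: "{{ inventory_hostname }}-{{ item.1 }}"  # illustrative source naming
        dest: "/var/lib/openstack/certs/{{ item.0 }}/default/{{ item.1 }}"
        owner: root
        group: root
        mode: "0600"
      loop: "{{ edpm_tls_services | product(['tls.crt', 'ca.crt', 'tls.key']) | list }}"

The directories stay traversable (0755) so containers can reach them, while the certs and keys themselves are locked down to 0600.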
Nov 25 10:13:50 compute-0 sudo[79469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiexefkosjhwyhyvxbqghrouwkpxcqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065630.5290134-109-256266430135011/AnsiballZ_file.py'
Nov 25 10:13:50 compute-0 sudo[79469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:51 compute-0 python3.9[79471]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:13:51 compute-0 sudo[79469]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:51 compute-0 sudo[79621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yekmhltmspemqasaevgvlpcvnzclysys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065631.1792545-109-69121018060208/AnsiballZ_file.py'
Nov 25 10:13:51 compute-0 sudo[79621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:51 compute-0 python3.9[79623]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:13:51 compute-0 sudo[79621]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:52 compute-0 sudo[79773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byqdjtloabxifewtsymiyykzgbdrkuav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065631.8643034-124-248312309403069/AnsiballZ_stat.py'
Nov 25 10:13:52 compute-0 sudo[79773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:52 compute-0 python3.9[79775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:13:52 compute-0 sudo[79773]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:52 compute-0 sudo[79896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgwneupvtflcgkhsdaqlkbjfaoyimua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065631.8643034-124-248312309403069/AnsiballZ_copy.py'
Nov 25 10:13:52 compute-0 sudo[79896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:52 compute-0 python3.9[79898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065631.8643034-124-248312309403069/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=24f8c0184e3fe50ce47e29e117afe24c1438344e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:53 compute-0 sudo[79896]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:53 compute-0 sudo[80048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbmzntrcaauraqmdrluvtvtmwgriknbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065633.1745484-124-89032653688875/AnsiballZ_stat.py'
Nov 25 10:13:53 compute-0 sudo[80048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:53 compute-0 python3.9[80050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:13:53 compute-0 sudo[80048]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:54 compute-0 sudo[80171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynwbbfllhylirbpcniapllzbwrwmcfhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065633.1745484-124-89032653688875/AnsiballZ_copy.py'
Nov 25 10:13:54 compute-0 sudo[80171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:54 compute-0 python3.9[80173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065633.1745484-124-89032653688875/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=f886beb4e844a1e3960ffd3b23efc9359490a0f4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:54 compute-0 sudo[80171]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:54 compute-0 sudo[80323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnmhsxqbfojzrlfyaeodayrqyxflrupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065634.5042982-124-269172061255904/AnsiballZ_stat.py'
Nov 25 10:13:54 compute-0 sudo[80323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:55 compute-0 python3.9[80325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:13:55 compute-0 sudo[80323]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:55 compute-0 sudo[80446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkzqdydpwpafxbmmaebyynlddjjoplob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065634.5042982-124-269172061255904/AnsiballZ_copy.py'
Nov 25 10:13:55 compute-0 sudo[80446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:55 compute-0 python3.9[80448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065634.5042982-124-269172061255904/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=50b598e9a654371d3cc3a73f75d39df413e3ece7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:55 compute-0 sudo[80446]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:56 compute-0 sudo[80598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zubsithvifadtvxjoygswkqefikjcomy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065635.898255-168-237394846415312/AnsiballZ_file.py'
Nov 25 10:13:56 compute-0 sudo[80598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:56 compute-0 python3.9[80600]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:13:56 compute-0 sudo[80598]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:56 compute-0 sudo[80750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oymuyyrvzbglqfmsresttqmbiaiqgqwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065636.615511-168-152852917758244/AnsiballZ_file.py'
Nov 25 10:13:56 compute-0 sudo[80750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:57 compute-0 python3.9[80752]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:13:57 compute-0 sudo[80750]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:57 compute-0 sudo[80902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkvmsvlegzqndkxsjxglffngmdyfnxik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065637.359864-183-75535872255996/AnsiballZ_stat.py'
Nov 25 10:13:57 compute-0 sudo[80902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:57 compute-0 python3.9[80904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:13:57 compute-0 sudo[80902]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:58 compute-0 sudo[81025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obgjyqgnrqnovfclwllvcwofyarjuqiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065637.359864-183-75535872255996/AnsiballZ_copy.py'
Nov 25 10:13:58 compute-0 sudo[81025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:58 compute-0 python3.9[81027]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065637.359864-183-75535872255996/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=68a9d8cf07a59a58d9e10c1d8d5ed115617a77c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:58 compute-0 sudo[81025]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:58 compute-0 sudo[81177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhzdbtkczogoqjlujbevvoqucjxifyos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065638.6311207-183-123174606286669/AnsiballZ_stat.py'
Nov 25 10:13:58 compute-0 sudo[81177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:59 compute-0 python3.9[81179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:13:59 compute-0 sudo[81177]: pam_unix(sudo:session): session closed for user root
Nov 25 10:13:59 compute-0 sudo[81300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnjshjwpcsvhabkgupjxvrlkbbvxxngi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065638.6311207-183-123174606286669/AnsiballZ_copy.py'
Nov 25 10:13:59 compute-0 sudo[81300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:13:59 compute-0 python3.9[81302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065638.6311207-183-123174606286669/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ebfb1b7ead53e10e15811d36286c0667f2300c69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:13:59 compute-0 sudo[81300]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:00 compute-0 sudo[81452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzllmpxeiuapoeyzanmbyybrkofoixqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065640.078236-183-203985645248939/AnsiballZ_stat.py'
Nov 25 10:14:00 compute-0 sudo[81452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:00 compute-0 python3.9[81454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:00 compute-0 sudo[81452]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:01 compute-0 sudo[81575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beramgaldacrwqfmhkqwanuuuhshmpxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065640.078236-183-203985645248939/AnsiballZ_copy.py'
Nov 25 10:14:01 compute-0 sudo[81575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:01 compute-0 python3.9[81577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065640.078236-183-203985645248939/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=69815b6af00bd4eb5a41f39cf9a66aae05e1080c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:01 compute-0 sudo[81575]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:01 compute-0 sudo[81727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnaibtqfpzhfbouaunxawwswvfvliugx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065641.4805295-227-88346092714949/AnsiballZ_file.py'
Nov 25 10:14:01 compute-0 sudo[81727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:02 compute-0 python3.9[81729]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:02 compute-0 sudo[81727]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:02 compute-0 sudo[81879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evrinpcreufmgjchfjgotuvgkbqjeqbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065642.3061347-227-268841490560977/AnsiballZ_file.py'
Nov 25 10:14:02 compute-0 sudo[81879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:02 compute-0 python3.9[81881]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:02 compute-0 sudo[81879]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:03 compute-0 sudo[82031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndpfeencqdzwogrrwakcwlpiddazukfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065643.0096123-242-82325208113810/AnsiballZ_stat.py'
Nov 25 10:14:03 compute-0 sudo[82031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:03 compute-0 python3.9[82033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:03 compute-0 sudo[82031]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:03 compute-0 sudo[82154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayalnfrpgvltburvqrqepkpktnyqqasa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065643.0096123-242-82325208113810/AnsiballZ_copy.py'
Nov 25 10:14:03 compute-0 sudo[82154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:04 compute-0 chronyd[65748]: Selected source 23.159.16.194 (pool.ntp.org)
Nov 25 10:14:04 compute-0 python3.9[82156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065643.0096123-242-82325208113810/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0a57f91f44e352a232444b87882ff8cad90d012d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:04 compute-0 sudo[82154]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:04 compute-0 sudo[82306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaddevnqyubnydaivsptretpnpvjidwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065644.3321192-242-110159238758640/AnsiballZ_stat.py'
Nov 25 10:14:04 compute-0 sudo[82306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:04 compute-0 python3.9[82308]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:04 compute-0 sudo[82306]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:05 compute-0 sudo[82429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alucfkfgurlhbgfbhvllwtabosgbejzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065644.3321192-242-110159238758640/AnsiballZ_copy.py'
Nov 25 10:14:05 compute-0 sudo[82429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:05 compute-0 python3.9[82431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065644.3321192-242-110159238758640/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ad7c9d059555468fde3690aec99e2b0e99b1b7fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:05 compute-0 sudo[82429]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:06 compute-0 sudo[82581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtiuexoqysbcbhkterauyrcqnruoyzjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065645.8126643-242-122261532035166/AnsiballZ_stat.py'
Nov 25 10:14:06 compute-0 sudo[82581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:06 compute-0 python3.9[82583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:06 compute-0 sudo[82581]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:06 compute-0 sudo[82704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdxwzdzowpfyschttqemcerdcqjtindj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065645.8126643-242-122261532035166/AnsiballZ_copy.py'
Nov 25 10:14:06 compute-0 sudo[82704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:07 compute-0 python3.9[82706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065645.8126643-242-122261532035166/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=04c626193f2ab4a2de834de638e3ba307af9c49b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:07 compute-0 sudo[82704]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:07 compute-0 sudo[82856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yobvufasnkwbanteamvbjuwgyffhrnjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065647.3882208-286-169741328436941/AnsiballZ_file.py'
Nov 25 10:14:07 compute-0 sudo[82856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:08 compute-0 python3.9[82858]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:08 compute-0 sudo[82856]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:08 compute-0 sudo[83008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuwivxzzjyfurgbukwjalmwpunkymfjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065648.265056-286-204322103975381/AnsiballZ_file.py'
Nov 25 10:14:08 compute-0 sudo[83008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:08 compute-0 python3.9[83010]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:08 compute-0 sudo[83008]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:09 compute-0 sudo[83160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-strnizfxnqembcuunwjuwcdwrrltcdnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065649.005507-301-276371490798729/AnsiballZ_stat.py'
Nov 25 10:14:09 compute-0 sudo[83160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:09 compute-0 python3.9[83162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:09 compute-0 sudo[83160]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:09 compute-0 sudo[83283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlfknhcsfqeaaflmatwwzgosjewdvkxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065649.005507-301-276371490798729/AnsiballZ_copy.py'
Nov 25 10:14:09 compute-0 sudo[83283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:10 compute-0 python3.9[83285]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065649.005507-301-276371490798729/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=eebf4456d33d7eecfec7977b2eadc8d010c326ab backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:10 compute-0 sudo[83283]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:10 compute-0 sudo[83435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxrhtgqmeixwnkzjyjwfluexsydygkjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065650.3249757-301-209950780338195/AnsiballZ_stat.py'
Nov 25 10:14:10 compute-0 sudo[83435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:10 compute-0 python3.9[83437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:10 compute-0 sudo[83435]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:11 compute-0 sudo[83558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yphvgplywkralohlsprqfwfuvsytmspy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065650.3249757-301-209950780338195/AnsiballZ_copy.py'
Nov 25 10:14:11 compute-0 sudo[83558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:11 compute-0 python3.9[83560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065650.3249757-301-209950780338195/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=ebfb1b7ead53e10e15811d36286c0667f2300c69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:11 compute-0 sudo[83558]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:12 compute-0 sudo[83710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vihplpiobhdkfgsizwxznhjkeksbfyqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065651.8631918-301-154775750595845/AnsiballZ_stat.py'
Nov 25 10:14:12 compute-0 sudo[83710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:12 compute-0 python3.9[83712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:12 compute-0 sudo[83710]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:12 compute-0 sudo[83833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcjghfgugkykugyzdqbwdsibvxjsnpfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065651.8631918-301-154775750595845/AnsiballZ_copy.py'
Nov 25 10:14:12 compute-0 sudo[83833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:13 compute-0 python3.9[83835]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065651.8631918-301-154775750595845/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=b111a4442c4d292a54336cd28c7735293cd016ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:13 compute-0 sudo[83833]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:14 compute-0 sudo[83985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urbtzmruoyswalkfwfdjscnhzwtkhqzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065653.986258-361-197232818770609/AnsiballZ_file.py'
Nov 25 10:14:14 compute-0 sudo[83985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:14 compute-0 python3.9[83987]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:14 compute-0 sudo[83985]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:15 compute-0 sudo[84137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prisxlmvnssyrqfgguxvdwqotagqshfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065654.7373521-369-10197914664636/AnsiballZ_stat.py'
Nov 25 10:14:15 compute-0 sudo[84137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:15 compute-0 python3.9[84139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:15 compute-0 sudo[84137]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:15 compute-0 sudo[84260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjvywztqqotcbifritzoobwrodkacbob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065654.7373521-369-10197914664636/AnsiballZ_copy.py'
Nov 25 10:14:15 compute-0 sudo[84260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:16 compute-0 python3.9[84262]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065654.7373521-369-10197914664636/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:16 compute-0 sudo[84260]: pam_unix(sudo:session): session closed for user root
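From here the play fans the shared CA bundle out to one directory per service: every copy below logs the same checksum (730519df…), so it is a single tls-ca-bundle.pem landing in /var/lib/openstack/cacerts/nova, repo-setup, libvirt, ovn and telemetry in turn. A compact sketch of that fan-out (cacert_services is an illustrative variable):

    # Sketch: distribute the shared CA bundle to every service's cacerts directory.
    - name: Create the per-service CA directories
      become: true
      ansible.builtin.file:
        path: "/var/lib/openstack/cacerts/{{ item }}"
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t
      loop: "{{ cacert_services }}"  # e.g. [nova, repo-setup, libvirt, ovn, telemetry]

    - name: Install the shared bundle (world-readable, unlike the keys above)
      become: true
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: "/var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem"
        owner: root
        group: root
        mode: "0644"
      loop: "{{ cacert_services }}"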
Nov 25 10:14:16 compute-0 sudo[84412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlscfhfthelmtovddedkxbbdndkmmavl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065656.252332-385-235999711478327/AnsiballZ_file.py'
Nov 25 10:14:16 compute-0 sudo[84412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:16 compute-0 python3.9[84414]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:16 compute-0 sudo[84412]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:17 compute-0 sudo[84564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkfzxvqgrcvwjiznvwjqxhjfjruaecol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065657.0663533-393-51335205655612/AnsiballZ_stat.py'
Nov 25 10:14:17 compute-0 sudo[84564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:17 compute-0 python3.9[84566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:17 compute-0 sudo[84564]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:18 compute-0 sudo[84687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwfcegzqtovszjigeghmsakcxhnwjpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065657.0663533-393-51335205655612/AnsiballZ_copy.py'
Nov 25 10:14:18 compute-0 sudo[84687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:18 compute-0 python3.9[84689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065657.0663533-393-51335205655612/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:18 compute-0 sudo[84687]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:18 compute-0 sudo[84839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnvsheuszbfaihobinjnctxrjmepotno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065658.5252807-409-69610510509675/AnsiballZ_file.py'
Nov 25 10:14:18 compute-0 sudo[84839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:19 compute-0 python3.9[84841]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:19 compute-0 sudo[84839]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:20 compute-0 sudo[84991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtgluzsujzljkcddziixvputgyfyvyjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065660.0300164-417-150783741170505/AnsiballZ_stat.py'
Nov 25 10:14:20 compute-0 sudo[84991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:20 compute-0 python3.9[84993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:20 compute-0 sudo[84991]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:20 compute-0 sudo[85114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpundvhzjkcfrgvbytjzclisfriyynzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065660.0300164-417-150783741170505/AnsiballZ_copy.py'
Nov 25 10:14:20 compute-0 sudo[85114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:21 compute-0 python3.9[85116]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065660.0300164-417-150783741170505/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:21 compute-0 sudo[85114]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:21 compute-0 sudo[85266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxetskqvqfbgnmfnbogrmotkdkwfyckc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065661.3790631-433-279189207899895/AnsiballZ_file.py'
Nov 25 10:14:21 compute-0 sudo[85266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:21 compute-0 python3.9[85268]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:21 compute-0 sudo[85266]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:22 compute-0 sudo[85418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btshvugwjhvigznkiljxkbosenzyjmpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065662.1322634-441-176531093316362/AnsiballZ_stat.py'
Nov 25 10:14:22 compute-0 sudo[85418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:22 compute-0 python3.9[85420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:22 compute-0 sudo[85418]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:23 compute-0 sudo[85541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tamrxvpuwzbxocbckhyzsjhwuhvtueqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065662.1322634-441-176531093316362/AnsiballZ_copy.py'
Nov 25 10:14:23 compute-0 sudo[85541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:23 compute-0 python3.9[85543]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065662.1322634-441-176531093316362/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:23 compute-0 sudo[85541]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:23 compute-0 sudo[85693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpbeagbpszxpujizfhffoudjkvjhfmku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065663.544388-457-162489964991488/AnsiballZ_file.py'
Nov 25 10:14:23 compute-0 sudo[85693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:24 compute-0 python3.9[85695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:24 compute-0 sudo[85693]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:24 compute-0 sudo[85845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnblxuvazpiujtcpboloxikxqhoczpbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065664.2196383-465-47253148488774/AnsiballZ_stat.py'
Nov 25 10:14:24 compute-0 sudo[85845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:24 compute-0 python3.9[85847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:24 compute-0 sudo[85845]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:25 compute-0 sudo[85968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkurdmingyfzmthwlklidgbmnqjludsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065664.2196383-465-47253148488774/AnsiballZ_copy.py'
Nov 25 10:14:25 compute-0 sudo[85968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:25 compute-0 python3.9[85970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065664.2196383-465-47253148488774/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:25 compute-0 sudo[85968]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:25 compute-0 sudo[86120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfmmpsagriydspmscmujlxslqtcpkpeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065665.4824364-481-226511279708567/AnsiballZ_file.py'
Nov 25 10:14:25 compute-0 sudo[86120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:25 compute-0 python3.9[86122]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:25 compute-0 sudo[86120]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:26 compute-0 sudo[86272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfjascgridaqecwzgmmfqwglfkfktdca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065666.14934-489-126697506486359/AnsiballZ_stat.py'
Nov 25 10:14:26 compute-0 sudo[86272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:26 compute-0 python3.9[86274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:26 compute-0 sudo[86272]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:26 compute-0 sudo[86395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhoqnyjufxfjtcmlwyobsqsqfagacopm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065666.14934-489-126697506486359/AnsiballZ_copy.py'
Nov 25 10:14:26 compute-0 sudo[86395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:27 compute-0 python3.9[86397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065666.14934-489-126697506486359/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:27 compute-0 sudo[86395]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:27 compute-0 sudo[86547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iruaejxtdpjlfjlxastoeayghafhfszv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065667.3032959-505-10925779651979/AnsiballZ_file.py'
Nov 25 10:14:27 compute-0 sudo[86547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:27 compute-0 python3.9[86549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:27 compute-0 sudo[86547]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:28 compute-0 sudo[86699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxeoscgsymaaoprnsstaveeagukalfil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065667.8866923-513-186495834833223/AnsiballZ_stat.py'
Nov 25 10:14:28 compute-0 sudo[86699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:28 compute-0 python3.9[86701]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:28 compute-0 sudo[86699]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:28 compute-0 sudo[86822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixdzmkavzcepxykknqdrhalluxixinkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065667.8866923-513-186495834833223/AnsiballZ_copy.py'
Nov 25 10:14:28 compute-0 sudo[86822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:28 compute-0 python3.9[86824]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065667.8866923-513-186495834833223/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:28 compute-0 sudo[86822]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:29 compute-0 sudo[86974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhsinqbvuemxxvdzkpbnkkmmugayxwob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065669.035532-529-43916789852485/AnsiballZ_file.py'
Nov 25 10:14:29 compute-0 sudo[86974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:29 compute-0 python3.9[86976]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:29 compute-0 sudo[86974]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:29 compute-0 sudo[87126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eakaiigyuzlygbtuwifkkcmmhfokcjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065669.612416-537-250885114721447/AnsiballZ_stat.py'
Nov 25 10:14:29 compute-0 sudo[87126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:30 compute-0 python3.9[87128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:30 compute-0 sudo[87126]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:30 compute-0 sudo[87249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnumwzctwcxkiaistvvarwhuyrjkhuce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065669.612416-537-250885114721447/AnsiballZ_copy.py'
Nov 25 10:14:30 compute-0 sudo[87249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:30 compute-0 python3.9[87251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065669.612416-537-250885114721447/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=730519df0c6d8366514b26ec0fa8c8c9f56a8b7a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:30 compute-0 sudo[87249]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:30 compute-0 sshd-session[78036]: Connection closed by 192.168.122.30 port 45588
Nov 25 10:14:30 compute-0 sshd-session[78033]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:14:30 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 25 10:14:30 compute-0 systemd[1]: session-19.scope: Consumed 37.541s CPU time.
Nov 25 10:14:30 compute-0 systemd-logind[822]: Session 19 logged out. Waiting for processes to exit.
Nov 25 10:14:30 compute-0 systemd-logind[822]: Removed session 19.
Nov 25 10:14:36 compute-0 sshd-session[87277]: Accepted publickey for zuul from 192.168.122.30 port 53112 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:14:36 compute-0 systemd-logind[822]: New session 20 of user zuul.
Nov 25 10:14:36 compute-0 systemd[1]: Started Session 20 of User zuul.
Nov 25 10:14:36 compute-0 sshd-session[87277]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:14:37 compute-0 python3.9[87430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:14:38 compute-0 sudo[87584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idzddbjxlvguiipqeufofvsspfckzbnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065678.2654128-34-71140227828681/AnsiballZ_file.py'
Nov 25 10:14:38 compute-0 sudo[87584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:38 compute-0 python3.9[87586]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:38 compute-0 sudo[87584]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:39 compute-0 sudo[87736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwkkoeulvugbuisfknlqqwazwplzavol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065679.0937254-34-128663983506839/AnsiballZ_file.py'
Nov 25 10:14:39 compute-0 sudo[87736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:39 compute-0 python3.9[87738]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:14:39 compute-0 sudo[87736]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:40 compute-0 python3.9[87888]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:14:40 compute-0 sudo[88038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdutuvslulbxhaoaittrcphhuihabbzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065680.4508379-57-238064402286893/AnsiballZ_seboolean.py'
Nov 25 10:14:40 compute-0 sudo[88038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:41 compute-0 python3.9[88040]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 10:14:42 compute-0 sudo[88038]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:43 compute-0 sudo[88194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmtvntbifrnqnggbohqvynrfqazcfnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065682.7971802-67-87122665351366/AnsiballZ_setup.py'
Nov 25 10:14:43 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 25 10:14:43 compute-0 sudo[88194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:43 compute-0 python3.9[88196]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:14:43 compute-0 sudo[88194]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:44 compute-0 sudo[88280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjjcjqgtljmnlpoaltlxkvvwazhvceia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065682.7971802-67-87122665351366/AnsiballZ_dnf.py'
Nov 25 10:14:44 compute-0 sudo[88280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:44 compute-0 python3.9[88282]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:14:45 compute-0 sshd-session[88205]: Connection closed by authenticating user root 171.244.51.45 port 44860 [preauth]
Nov 25 10:14:45 compute-0 sudo[88280]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:46 compute-0 sudo[88433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsnlobhjzzvkifjyeykhltytcjlrbsmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065685.8048928-79-54194759863311/AnsiballZ_systemd.py'
Nov 25 10:14:46 compute-0 sudo[88433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:46 compute-0 python3.9[88435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:14:46 compute-0 sudo[88433]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:47 compute-0 sudo[88588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pujhphzouppzctmcpcnipjcojsatiiuz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764065686.95284-87-183185286177218/AnsiballZ_edpm_nftables_snippet.py'
Nov 25 10:14:47 compute-0 sudo[88588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:47 compute-0 python3[88590]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 25 10:14:47 compute-0 sudo[88588]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:48 compute-0 sudo[88740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alqfpsarxntthtnysbiehfytvsnaenuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065687.8688304-96-259171733935035/AnsiballZ_file.py'
Nov 25 10:14:48 compute-0 sudo[88740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:48 compute-0 python3.9[88742]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:48 compute-0 sudo[88740]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:48 compute-0 sudo[88892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcgcnvjaxmvdcawneuglvprcylftwimd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065688.4935777-104-268270866229077/AnsiballZ_stat.py'
Nov 25 10:14:48 compute-0 sudo[88892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:49 compute-0 python3.9[88894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:49 compute-0 sudo[88892]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:49 compute-0 sudo[88970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzspvjthczrzdhihaheyiltdocauyzdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065688.4935777-104-268270866229077/AnsiballZ_file.py'
Nov 25 10:14:49 compute-0 sudo[88970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:49 compute-0 python3.9[88972]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:49 compute-0 sudo[88970]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:49 compute-0 sudo[89122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrbgjxycouyqabqksgzrknaiiqohmmqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065689.6785994-116-177454600169629/AnsiballZ_stat.py'
Nov 25 10:14:49 compute-0 sudo[89122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:50 compute-0 python3.9[89124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:50 compute-0 sudo[89122]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:50 compute-0 sudo[89200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jklkbkrqfcokeqgoyaxdqhuswscxmjmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065689.6785994-116-177454600169629/AnsiballZ_file.py'
Nov 25 10:14:50 compute-0 sudo[89200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:50 compute-0 python3.9[89202]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8jodvkim recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:50 compute-0 sudo[89200]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:51 compute-0 sudo[89352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooeyyaoxslnmeeyzwqgxuuytijetuhwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065690.762337-128-110895188447060/AnsiballZ_stat.py'
Nov 25 10:14:51 compute-0 sudo[89352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:51 compute-0 python3.9[89354]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:51 compute-0 sudo[89352]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:51 compute-0 sudo[89430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qicctayrneknvvbpqfmjvedkhghhvdfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065690.762337-128-110895188447060/AnsiballZ_file.py'
Nov 25 10:14:51 compute-0 sudo[89430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:51 compute-0 python3.9[89432]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:51 compute-0 sudo[89430]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:52 compute-0 sudo[89582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jclpaihxhrjnatbezdzkttccwhsfhzii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065692.0185592-141-195067100155847/AnsiballZ_command.py'
Nov 25 10:14:52 compute-0 sudo[89582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:52 compute-0 python3.9[89584]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:14:52 compute-0 sudo[89582]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:53 compute-0 sudo[89735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuhxfaggpfhnelbodpvimfpghsokzeax ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764065692.854017-149-258447638350805/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 10:14:53 compute-0 sudo[89735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:53 compute-0 python3[89737]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 10:14:53 compute-0 sudo[89735]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:53 compute-0 sudo[89887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oahapzdlsifdvbggmbhnzljjcbfknssc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065693.6972897-157-7473761587942/AnsiballZ_stat.py'
Nov 25 10:14:53 compute-0 sudo[89887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:54 compute-0 python3.9[89889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:54 compute-0 sudo[89887]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:54 compute-0 sudo[90012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oybuhqviygfuzoqwealimoozyyvdbvgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065693.6972897-157-7473761587942/AnsiballZ_copy.py'
Nov 25 10:14:54 compute-0 sudo[90012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:55 compute-0 python3.9[90014]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065693.6972897-157-7473761587942/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:55 compute-0 sudo[90012]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:55 compute-0 sudo[90164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzeyqjtnzrsgirihdayzugjqxupoewky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065695.3090866-172-246428591756709/AnsiballZ_stat.py'
Nov 25 10:14:55 compute-0 sudo[90164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:55 compute-0 python3.9[90166]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:55 compute-0 sudo[90164]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:56 compute-0 sudo[90289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzcbveagmmcwujiknvvcpwawegmffful ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065695.3090866-172-246428591756709/AnsiballZ_copy.py'
Nov 25 10:14:56 compute-0 sudo[90289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:56 compute-0 python3.9[90291]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065695.3090866-172-246428591756709/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:56 compute-0 sudo[90289]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:56 compute-0 sudo[90441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlzrbahgxstluscrtuyjpccbdpvqqzbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065696.6787689-187-11792523622038/AnsiballZ_stat.py'
Nov 25 10:14:56 compute-0 sudo[90441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:57 compute-0 python3.9[90443]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:57 compute-0 sudo[90441]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:57 compute-0 sudo[90566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdjkozhxnepbiajsisfmzqepufkdrzxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065696.6787689-187-11792523622038/AnsiballZ_copy.py'
Nov 25 10:14:57 compute-0 sudo[90566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:57 compute-0 python3.9[90568]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065696.6787689-187-11792523622038/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:57 compute-0 sudo[90566]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:58 compute-0 sudo[90718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xytybwfkkoosmsiwgzdvdmyztalanfeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065697.9930093-202-160847360770896/AnsiballZ_stat.py'
Nov 25 10:14:58 compute-0 sudo[90718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:58 compute-0 python3.9[90720]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:58 compute-0 sudo[90718]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:58 compute-0 sudo[90843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isafwfussvyyhmryyzfgcpsifcfaqkgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065697.9930093-202-160847360770896/AnsiballZ_copy.py'
Nov 25 10:14:58 compute-0 sudo[90843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:59 compute-0 python3.9[90845]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065697.9930093-202-160847360770896/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:14:59 compute-0 sudo[90843]: pam_unix(sudo:session): session closed for user root
Nov 25 10:14:59 compute-0 sudo[90995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcgvjjvwzluxnqkuzkucijnvsvpmyfxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065699.2280905-217-87748614040508/AnsiballZ_stat.py'
Nov 25 10:14:59 compute-0 sudo[90995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:14:59 compute-0 python3.9[90997]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:14:59 compute-0 sudo[90995]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:00 compute-0 sudo[91120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eedvukygtzoqtplsfrdmcztvjmqmajph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065699.2280905-217-87748614040508/AnsiballZ_copy.py'
Nov 25 10:15:00 compute-0 sudo[91120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:00 compute-0 python3.9[91122]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764065699.2280905-217-87748614040508/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:00 compute-0 sudo[91120]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:01 compute-0 sudo[91272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjcppjfkhrammbrpmdorgwsnatnmfhyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065700.7606308-232-210551310967732/AnsiballZ_file.py'
Nov 25 10:15:01 compute-0 sudo[91272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:01 compute-0 python3.9[91274]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:01 compute-0 sudo[91272]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:01 compute-0 sudo[91424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdifdxgbsxssrpgjfqkuxcperhdxrbfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065701.4457903-240-24928992105438/AnsiballZ_command.py'
Nov 25 10:15:01 compute-0 sudo[91424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:01 compute-0 python3.9[91426]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:01 compute-0 sudo[91424]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:02 compute-0 sudo[91579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzcqmxzbkooqdrkwohueaardjltjdfgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065702.118431-248-108432236480838/AnsiballZ_blockinfile.py'
Nov 25 10:15:02 compute-0 sudo[91579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:02 compute-0 python3.9[91581]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:02 compute-0 sudo[91579]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:03 compute-0 sudo[91731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bafstfuvuczkckzcshklehigyhmogxdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065703.0101573-257-152687588028956/AnsiballZ_command.py'
Nov 25 10:15:03 compute-0 sudo[91731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:03 compute-0 python3.9[91733]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:03 compute-0 sudo[91731]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:03 compute-0 sudo[91885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdkgabloqlybshtorlehzfblkraxljxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065703.6626594-265-36140869371481/AnsiballZ_stat.py'
Nov 25 10:15:03 compute-0 sudo[91885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:04 compute-0 python3.9[91887]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:15:04 compute-0 sudo[91885]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:04 compute-0 sudo[92039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqzbqonodyyieuailuctjmxdhczphjyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065704.305767-273-1185026636246/AnsiballZ_command.py'
Nov 25 10:15:04 compute-0 sudo[92039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:04 compute-0 python3.9[92041]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:04 compute-0 sudo[92039]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:05 compute-0 sudo[92194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipqxwwseebczjknwvususqblnaavuvbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065704.9762723-281-21410397023806/AnsiballZ_file.py'
Nov 25 10:15:05 compute-0 sudo[92194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:05 compute-0 python3.9[92196]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:05 compute-0 sudo[92194]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:07 compute-0 python3.9[92346]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:15:08 compute-0 sudo[92497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urvoufsxjnzsebjyxiosofcthijvjaal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065708.2621076-321-94020377214069/AnsiballZ_command.py'
Nov 25 10:15:08 compute-0 sudo[92497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:08 compute-0 python3.9[92499]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:08 compute-0 ovs-vsctl[92500]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 25 10:15:08 compute-0 sudo[92497]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:09 compute-0 sudo[92650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehhqqvvceedmhhhkzgnglugmqiprpehb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065709.0205925-330-131732629119134/AnsiballZ_command.py'
Nov 25 10:15:09 compute-0 sudo[92650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:09 compute-0 python3.9[92652]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:09 compute-0 sudo[92650]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:10 compute-0 sudo[92805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcpmyvhtdevxbpxqvahhtjwtujisqbmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065709.7539313-338-126784180961950/AnsiballZ_command.py'
Nov 25 10:15:10 compute-0 sudo[92805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:10 compute-0 python3.9[92807]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:10 compute-0 ovs-vsctl[92808]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 25 10:15:10 compute-0 sudo[92805]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:10 compute-0 python3.9[92958]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:15:11 compute-0 sudo[93110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljvrtwczqujvmoxxqojtadypdlognigy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065711.162651-355-232662146341063/AnsiballZ_file.py'
Nov 25 10:15:11 compute-0 sudo[93110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:11 compute-0 python3.9[93112]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:11 compute-0 sudo[93110]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:12 compute-0 sudo[93262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvalsqflipmceoygznlnggxsgkurtvon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065711.9524524-363-4474473647471/AnsiballZ_stat.py'
Nov 25 10:15:12 compute-0 sudo[93262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:12 compute-0 python3.9[93264]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:12 compute-0 sudo[93262]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:12 compute-0 sudo[93340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvuieheyvqlfwfvpunacpaqiasxabken ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065711.9524524-363-4474473647471/AnsiballZ_file.py'
Nov 25 10:15:12 compute-0 sudo[93340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:13 compute-0 python3.9[93342]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:13 compute-0 sudo[93340]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:13 compute-0 sudo[93492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfjexhxnvtlblcgvahcknoabkqxumxyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065713.2199264-363-21442008139759/AnsiballZ_stat.py'
Nov 25 10:15:13 compute-0 sudo[93492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:13 compute-0 python3.9[93494]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:13 compute-0 sudo[93492]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:14 compute-0 sudo[93570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vznwecafyxqzizdtfebyugkhhgcdyogc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065713.2199264-363-21442008139759/AnsiballZ_file.py'
Nov 25 10:15:14 compute-0 sudo[93570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:14 compute-0 python3.9[93572]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:14 compute-0 sudo[93570]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:14 compute-0 sudo[93722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clngqqfkjdyutxubcflzspaoutimmnhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065714.4363172-386-62027144041514/AnsiballZ_file.py'
Nov 25 10:15:14 compute-0 sudo[93722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:15 compute-0 python3.9[93724]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:15 compute-0 sudo[93722]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:15 compute-0 sudo[93874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljhnbedraldercxjvxiqotqtpiajytdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065715.196491-394-75130558123045/AnsiballZ_stat.py'
Nov 25 10:15:15 compute-0 sudo[93874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:15 compute-0 python3.9[93876]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:15 compute-0 sudo[93874]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:16 compute-0 sudo[93952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbijqncscknkqxomigxjwhxkbhzwakpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065715.196491-394-75130558123045/AnsiballZ_file.py'
Nov 25 10:15:16 compute-0 sudo[93952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:16 compute-0 python3.9[93954]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:16 compute-0 sudo[93952]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:16 compute-0 sudo[94104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgohqvkocfpebuhqkrmfgtddtonrglwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065716.393003-406-195719180979285/AnsiballZ_stat.py'
Nov 25 10:15:16 compute-0 sudo[94104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:16 compute-0 python3.9[94106]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:16 compute-0 sudo[94104]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:17 compute-0 sudo[94182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnsbdppdrchebbcgofjecikxeznogeeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065716.393003-406-195719180979285/AnsiballZ_file.py'
Nov 25 10:15:17 compute-0 sudo[94182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:17 compute-0 python3.9[94184]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:17 compute-0 sudo[94182]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:17 compute-0 sudo[94334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfltysddhmzjhjeztlxteldicjprthyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065717.6118865-418-211023216051065/AnsiballZ_systemd.py'
Nov 25 10:15:17 compute-0 sudo[94334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:18 compute-0 python3.9[94336]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:15:18 compute-0 systemd[1]: Reloading.
Nov 25 10:15:18 compute-0 systemd-rc-local-generator[94357]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:15:18 compute-0 systemd-sysv-generator[94362]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:15:18 compute-0 sudo[94334]: pam_unix(sudo:session): session closed for user root
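
The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=started) is what triggers the systemd "Reloading." pass and the generator warnings that follow it. A rough command-line equivalent, sketched in Python; this is an approximation, not what AnsiballZ actually executes:

    import subprocess

    # Approximate CLI equivalent of the module invocation above.
    for cmd in (["systemctl", "daemon-reload"],
                ["systemctl", "enable", "edpm-container-shutdown"],
                ["systemctl", "start", "edpm-container-shutdown"]):
        subprocess.run(cmd, check=True)
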
Nov 25 10:15:19 compute-0 sudo[94522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaxfaeuyubocwortuffhvmfxpwaheohr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065718.731576-426-123232145747768/AnsiballZ_stat.py'
Nov 25 10:15:19 compute-0 sudo[94522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:19 compute-0 python3.9[94524]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:19 compute-0 sudo[94522]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:19 compute-0 sudo[94600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pokcvnbzgcjxvrklwdoowamfnvcxbdpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065718.731576-426-123232145747768/AnsiballZ_file.py'
Nov 25 10:15:19 compute-0 sudo[94600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:19 compute-0 python3.9[94602]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:19 compute-0 sudo[94600]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:20 compute-0 sudo[94752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmjezbrxitqockyykoxyihuzdwmovtyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065719.9723983-438-268744832393637/AnsiballZ_stat.py'
Nov 25 10:15:20 compute-0 sudo[94752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:20 compute-0 python3.9[94754]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:20 compute-0 sudo[94752]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:20 compute-0 sudo[94831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrtgdejasfsgxmapcdsopunrcqcmgjms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065719.9723983-438-268744832393637/AnsiballZ_file.py'
Nov 25 10:15:20 compute-0 sudo[94831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:20 compute-0 python3.9[94833]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:20 compute-0 sudo[94831]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:21 compute-0 sudo[94983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yflnecbeftoxtmhntxopcyzwmhtceokz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065721.075579-450-39554692900462/AnsiballZ_systemd.py'
Nov 25 10:15:21 compute-0 sudo[94983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:21 compute-0 python3.9[94985]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:15:21 compute-0 systemd[1]: Reloading.
Nov 25 10:15:21 compute-0 systemd-rc-local-generator[95011]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:15:21 compute-0 systemd-sysv-generator[95016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:15:21 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 10:15:21 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 10:15:21 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 10:15:21 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 10:15:21 compute-0 sudo[94983]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:22 compute-0 sudo[95176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szcbeipzkmmnjmcuftkbrdxkgdvjlyfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065722.2474906-460-32944842394776/AnsiballZ_file.py'
Nov 25 10:15:22 compute-0 sudo[95176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:22 compute-0 python3.9[95178]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:22 compute-0 sudo[95176]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:23 compute-0 sudo[95328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivbhtrluvfijugsgragawlkgxpagprer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065722.986658-468-266852946873963/AnsiballZ_stat.py'
Nov 25 10:15:23 compute-0 sudo[95328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:23 compute-0 python3.9[95330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:23 compute-0 sudo[95328]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:23 compute-0 sudo[95451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgprwmttaqyroprwnnmjxywjawmbkdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065722.986658-468-266852946873963/AnsiballZ_copy.py'
Nov 25 10:15:23 compute-0 sudo[95451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:24 compute-0 python3.9[95453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065722.986658-468-266852946873963/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:24 compute-0 sudo[95451]: pam_unix(sudo:session): session closed for user root
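
The stat/copy pair above is Ansible's standard idempotent file deploy: ansible.legacy.stat computes a SHA-1 of the destination (get_checksum=True, checksum_algorithm=sha1), and ansible.legacy.copy only rewrites the file when that digest differs from the source checksum logged above (4098dd01...). A minimal sketch of the same check, with a hypothetical local source path:

    import hashlib
    from pathlib import Path

    def sha1_of(path):
        return hashlib.sha1(Path(path).read_bytes()).hexdigest()

    src = "healthcheck"  # hypothetical local source file
    dest = "/var/lib/openstack/healthchecks/ovn_controller/healthcheck"
    # Copy only when the destination is missing or its digest differs.
    if not Path(dest).exists() or sha1_of(dest) != sha1_of(src):
        Path(dest).write_bytes(Path(src).read_bytes())
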
Nov 25 10:15:24 compute-0 sudo[95603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbngnyunkexuxaaysnbtaorrrirgnuck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065724.5945618-485-63766089472290/AnsiballZ_file.py'
Nov 25 10:15:24 compute-0 sudo[95603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:25 compute-0 python3.9[95605]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:25 compute-0 sudo[95603]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:25 compute-0 sudo[95755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yembjkpokykbkdpvpgvahdskvlpgpsbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065725.3514-493-178996736685846/AnsiballZ_stat.py'
Nov 25 10:15:25 compute-0 sudo[95755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:25 compute-0 python3.9[95757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:25 compute-0 sudo[95755]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:26 compute-0 sudo[95878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlltocvadrqqhjhbzvfaranhxezotyfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065725.3514-493-178996736685846/AnsiballZ_copy.py'
Nov 25 10:15:26 compute-0 sudo[95878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:26 compute-0 python3.9[95880]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065725.3514-493-178996736685846/.source.json _original_basename=.fw96429h follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:26 compute-0 sudo[95878]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:26 compute-0 sudo[96030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfygtcbegsquwoyugnnkjlhufdcnkoad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065726.55742-508-99361541194235/AnsiballZ_file.py'
Nov 25 10:15:26 compute-0 sudo[96030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:27 compute-0 python3.9[96032]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:27 compute-0 sudo[96030]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:27 compute-0 sudo[96182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfyepequfccjfkpbbjhwcwoduvardkge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065727.279948-516-192542487884094/AnsiballZ_stat.py'
Nov 25 10:15:27 compute-0 sudo[96182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:27 compute-0 sudo[96182]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:28 compute-0 sudo[96305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqbyijhoitpdcjyjgxgmogihwpbjicf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065727.279948-516-192542487884094/AnsiballZ_copy.py'
Nov 25 10:15:28 compute-0 sudo[96305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:28 compute-0 sudo[96305]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:29 compute-0 sudo[96457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kovdxznnejixxlpemlurpufpsvybdsuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065728.7610211-533-82454335096171/AnsiballZ_container_config_data.py'
Nov 25 10:15:29 compute-0 sudo[96457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:29 compute-0 python3.9[96459]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 25 10:15:29 compute-0 sudo[96457]: pam_unix(sudo:session): session closed for user root
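
ansible-container_config_data scans config_path for files matching config_pattern (*.json here) and applies config_overrides (empty here) on top. A sketch of that behaviour under those assumptions; the real module's return format and merge semantics may differ:

    import glob, json, os

    def load_container_configs(config_path, config_pattern="*.json", overrides=None):
        # Parse every matching startup-config file, then layer overrides on top.
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_path, config_pattern))):
            name = os.path.splitext(os.path.basename(path))[0]
            with open(path) as f:
                configs[name] = json.load(f)
        for name, extra in (overrides or {}).items():
            configs.setdefault(name, {}).update(extra)
        return configs

    print(load_container_configs(
        "/var/lib/edpm-config/container-startup-config/ovn_controller"))
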
Nov 25 10:15:30 compute-0 sudo[96609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drbseyqyjtyrzydzfgqhusetpuyldpve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065729.7780907-542-41697350512065/AnsiballZ_container_config_hash.py'
Nov 25 10:15:30 compute-0 sudo[96609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:30 compute-0 python3.9[96611]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:15:30 compute-0 sudo[96609]: pam_unix(sudo:session): session closed for user root
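
ansible-container_config_hash derives a change-detection hash from the config volumes under config_vol_prefix (/var/lib/config-data), so containers are restarted only when their rendered configuration actually changes. One plausible scheme, purely illustrative; the module's actual hashing is not shown in this log:

    import hashlib, os

    def config_hash(root):
        # Hash file paths and contents in a stable order so any
        # configuration change yields a new digest.
        h = hashlib.sha256()
        for dirpath, _, files in sorted(os.walk(root)):
            for name in sorted(files):
                path = os.path.join(dirpath, name)
                h.update(path.encode())
                with open(path, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()
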
Nov 25 10:15:31 compute-0 sudo[96761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnjvehaurmvpkqevyowmjmbchjqfnvie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065730.7183383-551-29221769871753/AnsiballZ_podman_container_info.py'
Nov 25 10:15:31 compute-0 sudo[96761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:31 compute-0 python3.9[96763]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 10:15:31 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:15:31 compute-0 sudo[96761]: pam_unix(sudo:session): session closed for user root
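
podman_container_info with name=None inspects every container on the host (the overlay mount churn above is podman probing its storage). The same inventory can be pulled directly from the CLI; a sketch:

    import json, subprocess

    # List all containers as JSON, the raw data the module wraps.
    out = subprocess.run(["podman", "ps", "--all", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for c in json.loads(out):
        print(c["Names"], c["State"])
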
Nov 25 10:15:32 compute-0 sudo[96924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxdsvuvtnbwrdaimhxcswnlvjbjxjpxn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764065732.008399-564-151001819726655/AnsiballZ_edpm_container_manage.py'
Nov 25 10:15:32 compute-0 sudo[96924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:33 compute-0 python3[96926]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:15:33 compute-0 podman[96965]: 2025-11-25 10:15:33.314894943 +0000 UTC m=+0.029265810 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 10:15:33 compute-0 podman[96965]: 2025-11-25 10:15:33.469016531 +0000 UTC m=+0.183387348 container create 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_controller)
Nov 25 10:15:33 compute-0 python3[96926]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 25 10:15:33 compute-0 sudo[96924]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 25 10:15:34 compute-0 sudo[97153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmwpjdwlxrocztgtmtdkmmyjaswulvjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065733.8687565-572-128807971849762/AnsiballZ_stat.py'
Nov 25 10:15:34 compute-0 sudo[97153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:34 compute-0 python3.9[97155]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:15:34 compute-0 sudo[97153]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:34 compute-0 sudo[97307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynkbfhaobwdqhzqdeajprxdvoozasdnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065734.720243-581-194869939839579/AnsiballZ_file.py'
Nov 25 10:15:34 compute-0 sudo[97307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:35 compute-0 python3.9[97309]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:35 compute-0 sudo[97307]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:35 compute-0 sudo[97383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmjolfobnxjibzazyrsscjwwzvtswhwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065734.720243-581-194869939839579/AnsiballZ_stat.py'
Nov 25 10:15:35 compute-0 sudo[97383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:35 compute-0 python3.9[97385]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:15:35 compute-0 sudo[97383]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:36 compute-0 sudo[97534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyswkfznsbqcrcjwtpxadsamvgbngqvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065735.6750755-581-240658976933618/AnsiballZ_copy.py'
Nov 25 10:15:36 compute-0 sudo[97534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:36 compute-0 python3.9[97536]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764065735.6750755-581-240658976933618/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:15:36 compute-0 sudo[97534]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:36 compute-0 sudo[97610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uidoxpmdafdcipfzxabsgxohekdqxugb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065735.6750755-581-240658976933618/AnsiballZ_systemd.py'
Nov 25 10:15:36 compute-0 sudo[97610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:36 compute-0 python3.9[97612]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:15:36 compute-0 systemd[1]: Reloading.
Nov 25 10:15:36 compute-0 systemd-sysv-generator[97638]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:15:36 compute-0 systemd-rc-local-generator[97635]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:15:37 compute-0 sudo[97610]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:37 compute-0 sudo[97722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkkymejfhfjpqbzhgugsfqapoznyvztu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065735.6750755-581-240658976933618/AnsiballZ_systemd.py'
Nov 25 10:15:37 compute-0 sudo[97722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:37 compute-0 python3.9[97724]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:15:37 compute-0 systemd[1]: Reloading.
Nov 25 10:15:37 compute-0 systemd-sysv-generator[97756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:15:37 compute-0 systemd-rc-local-generator[97753]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:15:37 compute-0 systemd[1]: Starting ovn_controller container...
Nov 25 10:15:38 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 25 10:15:38 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035ee4325c4eedb4a6cc8bf1ab11ef42bbf62a83ac05e23731c7c8fcd7a3d3ba/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 10:15:38 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.
Nov 25 10:15:38 compute-0 podman[97764]: 2025-11-25 10:15:38.057667553 +0000 UTC m=+0.105722485 container init 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + sudo -E kolla_set_configs
Nov 25 10:15:38 compute-0 podman[97764]: 2025-11-25 10:15:38.081929853 +0000 UTC m=+0.129984765 container start 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:15:38 compute-0 edpm-start-podman-container[97764]: ovn_controller
Nov 25 10:15:38 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 25 10:15:38 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 25 10:15:38 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 25 10:15:38 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 25 10:15:38 compute-0 edpm-start-podman-container[97763]: Creating additional drop-in dependency for "ovn_controller" (5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022)
Nov 25 10:15:38 compute-0 systemd[97809]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 25 10:15:38 compute-0 systemd[1]: Reloading.
Nov 25 10:15:38 compute-0 podman[97785]: 2025-11-25 10:15:38.172354281 +0000 UTC m=+0.078764769 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:15:38 compute-0 systemd-sysv-generator[97869]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:15:38 compute-0 systemd-rc-local-generator[97864]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:15:38 compute-0 systemd[97809]: Queued start job for default target Main User Target.
Nov 25 10:15:38 compute-0 systemd[97809]: Created slice User Application Slice.
Nov 25 10:15:38 compute-0 systemd[97809]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 25 10:15:38 compute-0 systemd[97809]: Started Daily Cleanup of User's Temporary Directories.
Nov 25 10:15:38 compute-0 systemd[97809]: Reached target Paths.
Nov 25 10:15:38 compute-0 systemd[97809]: Reached target Timers.
Nov 25 10:15:38 compute-0 systemd[97809]: Starting D-Bus User Message Bus Socket...
Nov 25 10:15:38 compute-0 systemd[97809]: Starting Create User's Volatile Files and Directories...
Nov 25 10:15:38 compute-0 systemd[97809]: Listening on D-Bus User Message Bus Socket.
Nov 25 10:15:38 compute-0 systemd[97809]: Reached target Sockets.
Nov 25 10:15:38 compute-0 systemd[97809]: Finished Create User's Volatile Files and Directories.
Nov 25 10:15:38 compute-0 systemd[97809]: Reached target Basic System.
Nov 25 10:15:38 compute-0 systemd[97809]: Reached target Main User Target.
Nov 25 10:15:38 compute-0 systemd[97809]: Startup finished in 133ms.
Nov 25 10:15:38 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 25 10:15:38 compute-0 systemd[1]: 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022-1cb9be1a20e69c8b.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:15:38 compute-0 systemd[1]: 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022-1cb9be1a20e69c8b.service: Failed with result 'exit-code'.
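
The failed 5fca...-1cb9be1a20e69c8b.service above is the transient, systemd-driven podman healthcheck unit firing while ovn_controller is still coming up (health_status=starting, health_failing_streak=1 a few lines earlier); it clears once the daemon is ready. The same probe can be run by hand, using the container ID from the log:

    import subprocess

    # Exit status 1 here simply means the container is not (yet) healthy.
    subprocess.run(
        ["podman", "healthcheck", "run",
         "5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022"],
        check=False)
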
Nov 25 10:15:38 compute-0 systemd[1]: Started ovn_controller container.
Nov 25 10:15:38 compute-0 systemd[1]: Started Session c1 of User root.
Nov 25 10:15:38 compute-0 sudo[97722]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:38 compute-0 ovn_controller[97779]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:15:38 compute-0 ovn_controller[97779]: INFO:__main__:Validating config file
Nov 25 10:15:38 compute-0 ovn_controller[97779]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:15:38 compute-0 ovn_controller[97779]: INFO:__main__:Writing out command to execute
Nov 25 10:15:38 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 25 10:15:38 compute-0 ovn_controller[97779]: ++ cat /run_command
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + ARGS=
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + sudo kolla_copy_cacerts
Nov 25 10:15:38 compute-0 systemd[1]: Started Session c2 of User root.
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + [[ ! -n '' ]]
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + . kolla_extend_start
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 25 10:15:38 compute-0 ovn_controller[97779]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + umask 0022
Nov 25 10:15:38 compute-0 ovn_controller[97779]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 25 10:15:38 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
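
The shell trace above is kolla's entrypoint at work: kolla_set_configs reads /var/lib/kolla/config_files/config.json (the bind-mounted ovn_controller.json), writes its "command" value to /run_command, and the startup script execs it. The file's contents were masked in the earlier copy task (content=NOT_LOGGING_PARAMETER), but the command echoed from /run_command pins down the essential field; the other keys below are the usual kolla fields and are assumptions:

    # Sketch of the config.json driving this startup; "command" is taken
    # verbatim from the /run_command output above, the rest is assumed.
    kolla_config = {
        "command": ("/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock "
                    "-p /etc/pki/tls/private/ovndb.key "
                    "-c /etc/pki/tls/certs/ovndb.crt "
                    "-C /etc/pki/tls/certs/ovndbca.crt"),
        "config_files": [],  # nothing to copy: certs are bind-mounted read-only
        "permissions": [],
    }
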
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 25 10:15:38 compute-0 NetworkManager[56317]: <info>  [1764065738.5011] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 25 10:15:38 compute-0 NetworkManager[56317]: <info>  [1764065738.5018] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:15:38 compute-0 NetworkManager[56317]: <info>  [1764065738.5030] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Nov 25 10:15:38 compute-0 NetworkManager[56317]: <info>  [1764065738.5036] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Nov 25 10:15:38 compute-0 NetworkManager[56317]: <info>  [1764065738.5039] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 10:15:38 compute-0 kernel: br-int: entered promiscuous mode
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00024|main|INFO|OVS feature set changed, force recompute.
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 10:15:38 compute-0 ovn_controller[97779]: 2025-11-25T10:15:38Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 25 10:15:38 compute-0 NetworkManager[56317]: <info>  [1764065738.5273] manager: (ovn-b1977f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 25 10:15:38 compute-0 systemd-udevd[97922]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:15:38 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 25 10:15:38 compute-0 systemd-udevd[97932]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:15:38 compute-0 NetworkManager[56317]: <info>  [1764065738.5463] device (genev_sys_6081): carrier: link connected
Nov 25 10:15:38 compute-0 NetworkManager[56317]: <info>  [1764065738.5467] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Nov 25 10:15:38 compute-0 sudo[98045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skadkcjlzvuluwhplbddixbyemqteuvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065738.566315-609-185383622501826/AnsiballZ_command.py'
Nov 25 10:15:38 compute-0 sudo[98045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:39 compute-0 python3.9[98047]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:39 compute-0 ovs-vsctl[98048]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 25 10:15:39 compute-0 sudo[98045]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:39 compute-0 sudo[98198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsjjzvjatatjmtyodoeaujprcjfmopt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065739.1978486-617-57653980710754/AnsiballZ_command.py'
Nov 25 10:15:39 compute-0 sudo[98198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:39 compute-0 python3.9[98200]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:39 compute-0 ovs-vsctl[98202]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 25 10:15:39 compute-0 sudo[98198]: pam_unix(sudo:session): session closed for user root
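
The ovs-vsctl ERR above is expected rather than a failure of the deploy: a bare "get" exits non-zero when the requested key is absent, which is how the playbook discovers that ovn-cms-options is unset before removing it. The probe can be made non-fatal with --if-exists; a sketch:

    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "external_ids:ovn-cms-options"],
        check=True)  # exits 0 and prints nothing when the key is absent
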
Nov 25 10:15:40 compute-0 sudo[98353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-momlkdqpcyezbxwgeaehsdazzyiijeit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065740.0105042-631-263626552452435/AnsiballZ_command.py'
Nov 25 10:15:40 compute-0 sudo[98353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:40 compute-0 python3.9[98355]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:15:40 compute-0 ovs-vsctl[98356]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 25 10:15:40 compute-0 sudo[98353]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:40 compute-0 sshd-session[87280]: Connection closed by 192.168.122.30 port 53112
Nov 25 10:15:40 compute-0 sshd-session[87277]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:15:40 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Nov 25 10:15:40 compute-0 systemd[1]: session-20.scope: Consumed 47.762s CPU time.
Nov 25 10:15:40 compute-0 systemd-logind[822]: Session 20 logged out. Waiting for processes to exit.
Nov 25 10:15:40 compute-0 systemd-logind[822]: Removed session 20.
Nov 25 10:15:47 compute-0 sshd-session[98381]: Accepted publickey for zuul from 192.168.122.30 port 45986 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:15:47 compute-0 systemd-logind[822]: New session 22 of user zuul.
Nov 25 10:15:47 compute-0 systemd[1]: Started Session 22 of User zuul.
Nov 25 10:15:47 compute-0 sshd-session[98381]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:15:48 compute-0 python3.9[98534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:15:48 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 25 10:15:48 compute-0 systemd[97809]: Activating special unit Exit the Session...
Nov 25 10:15:48 compute-0 systemd[97809]: Stopped target Main User Target.
Nov 25 10:15:48 compute-0 systemd[97809]: Stopped target Basic System.
Nov 25 10:15:48 compute-0 systemd[97809]: Stopped target Paths.
Nov 25 10:15:48 compute-0 systemd[97809]: Stopped target Sockets.
Nov 25 10:15:48 compute-0 systemd[97809]: Stopped target Timers.
Nov 25 10:15:48 compute-0 systemd[97809]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 25 10:15:48 compute-0 systemd[97809]: Closed D-Bus User Message Bus Socket.
Nov 25 10:15:48 compute-0 systemd[97809]: Stopped Create User's Volatile Files and Directories.
Nov 25 10:15:48 compute-0 systemd[97809]: Removed slice User Application Slice.
Nov 25 10:15:48 compute-0 systemd[97809]: Reached target Shutdown.
Nov 25 10:15:48 compute-0 systemd[97809]: Finished Exit the Session.
Nov 25 10:15:48 compute-0 systemd[97809]: Reached target Exit the Session.
Nov 25 10:15:48 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 25 10:15:48 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 25 10:15:48 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 25 10:15:48 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 25 10:15:48 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 25 10:15:48 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 25 10:15:48 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 25 10:15:49 compute-0 sudo[98691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdvvaqirhoakexpgegonojbblxjkeppg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065748.673618-34-224380002938577/AnsiballZ_file.py'
Nov 25 10:15:49 compute-0 sudo[98691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:49 compute-0 python3.9[98693]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:49 compute-0 sudo[98691]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:49 compute-0 sudo[98843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxwasmgfzqsggxbvtphknvatapuytnjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065749.544272-34-5317420832033/AnsiballZ_file.py'
Nov 25 10:15:49 compute-0 sudo[98843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:50 compute-0 python3.9[98845]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:50 compute-0 sudo[98843]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:50 compute-0 sudo[98995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wigqysmwcxovbwnkbsqichbzdfvqohet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065750.3430505-34-93109961316561/AnsiballZ_file.py'
Nov 25 10:15:50 compute-0 sudo[98995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:50 compute-0 python3.9[98997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:50 compute-0 sudo[98995]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:51 compute-0 sudo[99147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lodrdobfglcsqyhtgofvstsgytnxoxmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065750.9481988-34-80474457628763/AnsiballZ_file.py'
Nov 25 10:15:51 compute-0 sudo[99147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:51 compute-0 python3.9[99149]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:51 compute-0 sudo[99147]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:51 compute-0 sudo[99299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybutskiijepbgwbbezvzkxsdahldjzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065751.552631-34-23682923737954/AnsiballZ_file.py'
Nov 25 10:15:51 compute-0 sudo[99299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:51 compute-0 python3.9[99301]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:51 compute-0 sudo[99299]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:52 compute-0 python3.9[99451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:15:53 compute-0 sudo[99601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbaudvprnyegzbtnhxpkgsrgwvorvknb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065752.9074757-78-69406891091500/AnsiballZ_seboolean.py'
Nov 25 10:15:53 compute-0 sudo[99601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:53 compute-0 python3.9[99603]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 25 10:15:54 compute-0 sudo[99601]: pam_unix(sudo:session): session closed for user root
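The seboolean task persistently enables virt_sandbox_use_netlink so that the privileged, host-networked containers started below can open netlink sockets under SELinux. The same change by hand:

    $ sudo setsebool -P virt_sandbox_use_netlink on
    $ getsebool virt_sandbox_use_netlink
    virt_sandbox_use_netlink --> on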
Nov 25 10:15:54 compute-0 python3.9[99753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:55 compute-0 python3.9[99874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065754.311446-86-176062705044438/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:15:56 compute-0 python3.9[100024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:15:56 compute-0 python3.9[100146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065755.958047-101-92172362109387/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
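Each ansible.legacy.stat / ansible.legacy.copy pair is Ansible's idempotent file deploy: stat hashes the file currently on disk, and copy rewrites it only when that SHA-1 differs from the rendered template (haproxy.j2, kill-script.j2 here). The deployed wrapper can be checked against the checksum logged above:

    $ sha1sum /var/lib/neutron/ovn_metadata_haproxy_wrapper
    95c62e64c8f82dd9393a560d1b052dc98d38f810  /var/lib/neutron/ovn_metadata_haproxy_wrapper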
Nov 25 10:15:57 compute-0 sudo[100296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfyzaeoyzewtgjyskugtnrjjafzqansg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065757.2593803-118-46452533505518/AnsiballZ_setup.py'
Nov 25 10:15:57 compute-0 sudo[100296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:57 compute-0 python3.9[100298]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:15:58 compute-0 sudo[100296]: pam_unix(sudo:session): session closed for user root
Nov 25 10:15:58 compute-0 sudo[100380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmjfnariujavyjwzcbxeuzlouezdjfvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065757.2593803-118-46452533505518/AnsiballZ_dnf.py'
Nov 25 10:15:58 compute-0 sudo[100380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:15:58 compute-0 python3.9[100382]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:16:00 compute-0 sudo[100380]: pam_unix(sudo:session): session closed for user root
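With the logged defaults (state=present, install_weak_deps=True, update_cache=False) the dnf task amounts to a plain package install:

    $ sudo dnf install -y openvswitch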
Nov 25 10:16:00 compute-0 sudo[100533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiwbvsypzdcopiyyjvsccgfsfgfcwhzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065760.175321-130-25354792761522/AnsiballZ_systemd.py'
Nov 25 10:16:00 compute-0 sudo[100533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:01 compute-0 python3.9[100535]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:16:01 compute-0 sudo[100533]: pam_unix(sudo:session): session closed for user root
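enabled=True plus state=started in ansible.builtin.systemd maps onto a single systemctl call:

    $ sudo systemctl enable --now openvswitch.service
    $ systemctl is-active openvswitch.service
    active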
Nov 25 10:16:01 compute-0 python3.9[100688]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:02 compute-0 python3.9[100809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065761.273513-138-135576223594361/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:02 compute-0 python3.9[100959]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:03 compute-0 python3.9[101080]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065762.4089506-138-4097951653940/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:04 compute-0 python3.9[101230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:05 compute-0 python3.9[101351]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065764.090591-182-137115243951612/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:05 compute-0 python3.9[101501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:06 compute-0 python3.9[101622]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065765.2656562-182-108854281462633/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
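These numbered snippets build the config directory that the container later mounts at /etc/neutron.conf.d (see the volume list in the podman create further down). oslo.config reads a config directory in lexical order with later files overriding earlier ones, so the numeric prefixes encode precedence (the 01-* base files first, then 05-nova-metadata.conf, then 10-neutron-metadata.conf last):

    $ ls /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/
    01-neutron-ovn-metadata-agent.conf  01-rootwrap.conf
    05-nova-metadata.conf               10-neutron-metadata.conf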
Nov 25 10:16:07 compute-0 python3.9[101772]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:16:07 compute-0 sudo[101924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypeekhwgpscnszddtkpoohritcaexca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065767.271731-220-19839756001268/AnsiballZ_file.py'
Nov 25 10:16:07 compute-0 sudo[101924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:07 compute-0 python3.9[101926]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:07 compute-0 sudo[101924]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:08 compute-0 sudo[102087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeymwwogyferabrpcjmhgkqjqqxnkatg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065768.2416627-228-17470346725501/AnsiballZ_stat.py'
Nov 25 10:16:08 compute-0 sudo[102087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:08 compute-0 ovn_controller[97779]: 2025-11-25T10:16:08Z|00025|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Nov 25 10:16:08 compute-0 ovn_controller[97779]: 2025-11-25T10:16:08Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 25 10:16:08 compute-0 podman[102050]: 2025-11-25 10:16:08.637259072 +0000 UTC m=+0.098317361 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:16:08 compute-0 python3.9[102093]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:08 compute-0 sudo[102087]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:09 compute-0 sudo[102178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfylotapooqnypbuqlccyqdmovhzaqof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065768.2416627-228-17470346725501/AnsiballZ_file.py'
Nov 25 10:16:09 compute-0 sudo[102178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:09 compute-0 python3.9[102180]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:09 compute-0 sudo[102178]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:09 compute-0 sudo[102330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mteaevzbpdawftnbwffqkflxvxegzmme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065769.3542814-228-65166920030688/AnsiballZ_stat.py'
Nov 25 10:16:09 compute-0 sudo[102330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:09 compute-0 python3.9[102332]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:09 compute-0 sudo[102330]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:10 compute-0 sudo[102408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtskkzdihxgkuvkwjvmyfoaxxlecvhxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065769.3542814-228-65166920030688/AnsiballZ_file.py'
Nov 25 10:16:10 compute-0 sudo[102408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:10 compute-0 python3.9[102410]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:10 compute-0 sudo[102408]: pam_unix(sudo:session): session closed for user root
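Both EDPM helpers (edpm-container-shutdown and edpm-start-podman-container) are deployed root-owned, mode 0700, and labelled container_file_t; ownership, mode and SELinux context can be confirmed in one listing:

    $ ls -lZ /var/local/libexec/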
Nov 25 10:16:10 compute-0 sudo[102560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yucggnjwwmkyhdxhtdbczkiuxyaxxdpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065770.5992718-251-238371245443279/AnsiballZ_file.py'
Nov 25 10:16:10 compute-0 sudo[102560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:11 compute-0 python3.9[102562]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:11 compute-0 sudo[102560]: pam_unix(sudo:session): session closed for user root
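mode=420 in the invocation above is the well-known unquoted-mode artifact: YAML 1.1 parses a bare 0644 as an octal literal, so the module receives the decimal integer 420, which still yields permission bits 0644; quoting the mode in the playbook ('0644') keeps the log unambiguous. The round trip:

    $ printf '%o\n' 420
    644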
Nov 25 10:16:11 compute-0 sudo[102712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybchphmrkgjuurnjzunldruoyljccthm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065771.212533-259-30531328806474/AnsiballZ_stat.py'
Nov 25 10:16:11 compute-0 sudo[102712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:11 compute-0 python3.9[102714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:11 compute-0 sudo[102712]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:11 compute-0 sudo[102790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sszbbtyyuvrjwzmkpnrywhbigjmlbqzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065771.212533-259-30531328806474/AnsiballZ_file.py'
Nov 25 10:16:11 compute-0 sudo[102790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:12 compute-0 python3.9[102792]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:12 compute-0 sudo[102790]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:12 compute-0 sudo[102942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wplfglxmspjamjnewsjxuhefqxzutaon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065772.3047194-271-201039083263405/AnsiballZ_stat.py'
Nov 25 10:16:12 compute-0 sudo[102942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:12 compute-0 python3.9[102944]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:12 compute-0 sudo[102942]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:13 compute-0 sudo[103020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmeltqffxkawgcepsbvdwlbjavojszcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065772.3047194-271-201039083263405/AnsiballZ_file.py'
Nov 25 10:16:13 compute-0 sudo[103020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:13 compute-0 python3.9[103022]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:13 compute-0 sudo[103020]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:13 compute-0 sudo[103172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-safdfgmhikemqpudtohvkgwjvspengia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065773.4626071-283-197204119338797/AnsiballZ_systemd.py'
Nov 25 10:16:13 compute-0 sudo[103172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:14 compute-0 python3.9[103174]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:14 compute-0 systemd[1]: Reloading.
Nov 25 10:16:14 compute-0 systemd-rc-local-generator[103201]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:16:14 compute-0 systemd-sysv-generator[103205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:16:14 compute-0 sudo[103172]: pam_unix(sudo:session): session closed for user root
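The unit plus 91-edpm-container-shutdown.preset pair uses systemd's preset mechanism; the preset file presumably holds a single policy line such as 'enable edpm-container-shutdown.service' (standard preset syntax, not quoted from the deployed file), while the ansible.builtin.systemd task above enables and starts the unit directly. Both sides can be inspected with:

    $ systemctl cat edpm-container-shutdown.service
    $ sudo systemctl preset edpm-container-shutdown.service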
Nov 25 10:16:14 compute-0 sudo[103361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjzzzziifctytspwokopufzdavksgqbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065774.5204651-291-270721854739368/AnsiballZ_stat.py'
Nov 25 10:16:14 compute-0 sudo[103361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:14 compute-0 python3.9[103363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:15 compute-0 sudo[103361]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:15 compute-0 sudo[103439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwuknhkfpuruhogwauyefobgeohiiyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065774.5204651-291-270721854739368/AnsiballZ_file.py'
Nov 25 10:16:15 compute-0 sudo[103439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:15 compute-0 python3.9[103441]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:15 compute-0 sudo[103439]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:15 compute-0 sudo[103591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryxcbuuosdulhngvfzzhqriwysixvkdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065775.6709802-303-250885670693052/AnsiballZ_stat.py'
Nov 25 10:16:15 compute-0 sudo[103591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:16 compute-0 python3.9[103593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:16 compute-0 sudo[103591]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:16 compute-0 sudo[103669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzchgovlsbniaucafycsjrpmorqfxuir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065775.6709802-303-250885670693052/AnsiballZ_file.py'
Nov 25 10:16:16 compute-0 sudo[103669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:16 compute-0 python3.9[103671]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:16 compute-0 sudo[103669]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:17 compute-0 sudo[103821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsucprfxsdomafcmhnxhrqwbtvdtgdds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065776.8787832-315-134904806242806/AnsiballZ_systemd.py'
Nov 25 10:16:17 compute-0 sudo[103821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:17 compute-0 python3.9[103823]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:17 compute-0 systemd[1]: Reloading.
Nov 25 10:16:17 compute-0 systemd-sysv-generator[103855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:16:17 compute-0 systemd-rc-local-generator[103852]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:16:17 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 10:16:17 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 10:16:17 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 10:16:17 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 10:16:17 compute-0 sudo[103821]: pam_unix(sudo:session): session closed for user root
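netns-placeholder behaves as a oneshot: it appears to exist only to ensure /run/netns is present as a shared mount (so it can be handed to containers with the :shared volume flag) and exits immediately, hence 'Deactivated successfully' right before 'Finished'. The mount propagation can be inspected with:

    $ findmnt -o TARGET,PROPAGATION /run/netns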
Nov 25 10:16:18 compute-0 sudo[104015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnoijycthidlmgsheczllaucqujkoygi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065778.1568644-325-164715740897710/AnsiballZ_file.py'
Nov 25 10:16:18 compute-0 sudo[104015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:18 compute-0 python3.9[104017]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:18 compute-0 sudo[104015]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:19 compute-0 sudo[104167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpgydzqiilsoslvwixhoartiuxvdvnif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065779.5080602-333-59293188999170/AnsiballZ_stat.py'
Nov 25 10:16:19 compute-0 sudo[104167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:20 compute-0 python3.9[104169]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:20 compute-0 sudo[104167]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:20 compute-0 sudo[104290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sahpotsuukfymizdzrvumfmidmsgferx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065779.5080602-333-59293188999170/AnsiballZ_copy.py'
Nov 25 10:16:20 compute-0 sudo[104290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:20 compute-0 python3.9[104292]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764065779.5080602-333-59293188999170/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:20 compute-0 sudo[104290]: pam_unix(sudo:session): session closed for user root
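The healthcheck script lands in /var/lib/openstack/healthchecks/ovn_metadata_agent/ and is mounted read-only at /openstack inside the container, where the configured healthcheck test '/openstack/healthcheck' runs it. Once the container is up, the check can also be exercised manually:

    $ sudo podman healthcheck run ovn_metadata_agent && echo healthy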
Nov 25 10:16:21 compute-0 sudo[104442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jugjuxycgdylspbfznohzslslbpgpsfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065780.869746-350-105421575500049/AnsiballZ_file.py'
Nov 25 10:16:21 compute-0 sudo[104442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:21 compute-0 python3.9[104444]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:16:21 compute-0 sudo[104442]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:21 compute-0 sudo[104594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlrtczlsgicmcymumrxgdqjicohdocgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065781.5354686-358-127498300452080/AnsiballZ_stat.py'
Nov 25 10:16:21 compute-0 sudo[104594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:21 compute-0 python3.9[104596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:16:21 compute-0 sudo[104594]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:22 compute-0 sudo[104717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrpdjqpjzfckbnajnstyhznjefxldwmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065781.5354686-358-127498300452080/AnsiballZ_copy.py'
Nov 25 10:16:22 compute-0 sudo[104717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:22 compute-0 python3.9[104719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764065781.5354686-358-127498300452080/.source.json _original_basename=.n814_7tt follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:22 compute-0 sudo[104717]: pam_unix(sudo:session): session closed for user root
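ovn_metadata_agent.json is a kolla config descriptor, mounted into the container as /var/lib/kolla/config_files/config.json and consumed by kolla_set_configs (whose output appears further down). A minimal sketch of the conventional shape; the values shown are assumptions, not the deployed file:

    $ sudo python3 -m json.tool /var/lib/kolla/config_files/ovn_metadata_agent.json
    {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [ ... ],
        "permissions": [ ... ]
    }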
Nov 25 10:16:22 compute-0 sudo[104869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aojjpmfsyomxmyeckgpdjzxkqcanfyju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065782.582095-373-118787529347479/AnsiballZ_file.py'
Nov 25 10:16:22 compute-0 sudo[104869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:22 compute-0 python3.9[104871]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:23 compute-0 sudo[104869]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:23 compute-0 sudo[105021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctdjpweodqopkbhqmerxxkqxvmqvnicp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065783.188127-381-12944572236712/AnsiballZ_stat.py'
Nov 25 10:16:23 compute-0 sudo[105021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:23 compute-0 sudo[105021]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:24 compute-0 sudo[105144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsstfbpzknqifwkxemvqhttungwhxuvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065783.188127-381-12944572236712/AnsiballZ_copy.py'
Nov 25 10:16:24 compute-0 sudo[105144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:24 compute-0 sudo[105144]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:24 compute-0 sudo[105296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awkmzfwtflzhtocmwngenssylthhqemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065784.5411453-398-26127706549156/AnsiballZ_container_config_data.py'
Nov 25 10:16:24 compute-0 sudo[105296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:25 compute-0 python3.9[105298]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 25 10:16:25 compute-0 sudo[105296]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:25 compute-0 sudo[105448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwjuqzinucyvcflfwglrnhxzwwyihxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065785.3804362-407-268996403353319/AnsiballZ_container_config_hash.py'
Nov 25 10:16:25 compute-0 sudo[105448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:26 compute-0 python3.9[105450]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:16:26 compute-0 sudo[105448]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:26 compute-0 sudo[105600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evfejgtxawtsdqsxxxotjujjtrmltgpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065786.3185015-416-136837434154408/AnsiballZ_podman_container_info.py'
Nov 25 10:16:26 compute-0 sudo[105600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:26 compute-0 python3.9[105602]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 10:16:27 compute-0 sudo[105600]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:28 compute-0 sudo[105778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdruyxvvjptemvdddubxryeetelbcnrg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764065787.7341764-429-278759248608890/AnsiballZ_edpm_container_manage.py'
Nov 25 10:16:28 compute-0 sudo[105778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:28 compute-0 python3[105780]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:16:28 compute-0 podman[105815]: 2025-11-25 10:16:28.901077871 +0000 UTC m=+0.066196014 container create 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 10:16:28 compute-0 podman[105815]: 2025-11-25 10:16:28.867891067 +0000 UTC m=+0.033009220 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 10:16:28 compute-0 python3[105780]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 10:16:29 compute-0 sudo[105778]: pam_unix(sudo:session): session closed for user root
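edpm_container_manage logs the exact podman create it issued (the PODMAN-CONTAINER-DEBUG line above); the full config_data is attached as a container label, and the EDPM_CONFIG_HASH environment value presumably gives later runs a way to detect configuration drift. The created-but-not-yet-started container can be found by label:

    $ sudo podman ps -a --filter label=config_id=ovn_metadata_agent
    $ sudo podman inspect ovn_metadata_agent --format '{{ index .Config.Labels "config_id" }}'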
Nov 25 10:16:29 compute-0 sudo[106002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qufeiptswaikwgmayvwatbhlhluntuko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065789.2901654-437-31573334266135/AnsiballZ_stat.py'
Nov 25 10:16:29 compute-0 sudo[106002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:29 compute-0 python3.9[106004]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:16:29 compute-0 sudo[106002]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:30 compute-0 sudo[106156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgwcvoxxpjtnehbyjzngqszwmfrwmphw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065790.162154-446-152351166704610/AnsiballZ_file.py'
Nov 25 10:16:30 compute-0 sudo[106156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:30 compute-0 python3.9[106158]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:30 compute-0 sudo[106156]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:30 compute-0 sudo[106232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykvxbwxkkbzxcfxdsecnzeyeklgatybp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065790.162154-446-152351166704610/AnsiballZ_stat.py'
Nov 25 10:16:30 compute-0 sudo[106232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:31 compute-0 python3.9[106234]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:16:31 compute-0 sudo[106232]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:31 compute-0 sudo[106383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zudilwllotpgighigcrvvrvstqbhogkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065791.283388-446-220553379896075/AnsiballZ_copy.py'
Nov 25 10:16:31 compute-0 sudo[106383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:31 compute-0 python3.9[106385]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764065791.283388-446-220553379896075/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:31 compute-0 sudo[106383]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:32 compute-0 sudo[106459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiydmclvsegjbwekgciyojgiweolwppc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065791.283388-446-220553379896075/AnsiballZ_systemd.py'
Nov 25 10:16:32 compute-0 sudo[106459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:32 compute-0 python3.9[106461]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:16:32 compute-0 systemd[1]: Reloading.
Nov 25 10:16:32 compute-0 systemd-sysv-generator[106492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:16:32 compute-0 systemd-rc-local-generator[106486]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:16:32 compute-0 sudo[106459]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:33 compute-0 sudo[106570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euucbgqlomylfpgdtqxcwmkqvvctvxck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065791.283388-446-220553379896075/AnsiballZ_systemd.py'
Nov 25 10:16:33 compute-0 sudo[106570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:33 compute-0 python3.9[106572]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:33 compute-0 systemd[1]: Reloading.
Nov 25 10:16:33 compute-0 systemd-sysv-generator[106605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:16:33 compute-0 systemd-rc-local-generator[106602]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:16:33 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 25 10:16:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eec48a47634bfc7db5e4c8df2d07350fd9d7af90df095aef49e9f0a6c5fedd9/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 25 10:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8eec48a47634bfc7db5e4c8df2d07350fd9d7af90df095aef49e9f0a6c5fedd9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
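The two xfs kernel lines are informational rather than errors: the filesystem backing the overlay mounts was created without the xfs bigtime feature, so its inode timestamps are 32-bit and run out in 2038. On reasonably recent xfsprogs the feature flag is visible in xfs_info, e.g. for the filesystem mounted at /:

    $ xfs_info / | grep -o 'bigtime=[01]'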
Nov 25 10:16:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.
Nov 25 10:16:34 compute-0 podman[106613]: 2025-11-25 10:16:34.05133771 +0000 UTC m=+0.154102442 container init 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + sudo -E kolla_set_configs
Nov 25 10:16:34 compute-0 podman[106613]: 2025-11-25 10:16:34.081316864 +0000 UTC m=+0.184081546 container start 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:16:34 compute-0 edpm-start-podman-container[106613]: ovn_metadata_agent
Nov 25 10:16:34 compute-0 podman[106636]: 2025-11-25 10:16:34.167578206 +0000 UTC m=+0.066156713 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 10:16:34 compute-0 edpm-start-podman-container[106612]: Creating additional drop-in dependency for "ovn_metadata_agent" (1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15)
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Validating config file
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Copying service configuration files
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Writing out command to execute
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
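The INFO lines above are kolla's config step reading /var/lib/kolla/config_files/config.json, copying service configuration into place (strategy COPY_ALWAYS), and fixing ownership of the /var/lib/neutron paths. A hedged reconstruction of the kind of config.json that would drive exactly these steps; the field names follow kolla's config-file conventions, while the owner/perm values are illustrative assumptions, not read from the real file:

import json

# Sketch of a kolla config.json matching the copy/permission steps logged
# above. Paths are taken from the log; owners/modes are assumptions.
config = {
    "command": "neutron-ovn-metadata-agent",
    "config_files": [
        {
            "source": "/etc/neutron.conf.d/01-rootwrap.conf",
            "dest": "/etc/neutron/rootwrap.conf",
            "owner": "neutron",   # assumption
            "perm": "0600",       # assumption
        },
    ],
    "permissions": [
        {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
        {"path": "/var/lib/neutron/external/pids", "owner": "neutron:neutron"},
    ],
}
print(json.dumps(config, indent=2))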
Nov 25 10:16:34 compute-0 systemd[1]: Reloading.
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: ++ cat /run_command
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + CMD=neutron-ovn-metadata-agent
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + ARGS=
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + sudo kolla_copy_cacerts
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + [[ ! -n '' ]]
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + . kolla_extend_start
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: Running command: 'neutron-ovn-metadata-agent'
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + umask 0022
Nov 25 10:16:34 compute-0 ovn_metadata_agent[106629]: + exec neutron-ovn-metadata-agent
Nov 25 10:16:34 compute-0 systemd-sysv-generator[106704]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:16:34 compute-0 systemd-rc-local-generator[106701]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:16:34 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 25 10:16:34 compute-0 sudo[106570]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:34 compute-0 sshd-session[98384]: Connection closed by 192.168.122.30 port 45986
Nov 25 10:16:34 compute-0 sshd-session[98381]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:16:34 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Nov 25 10:16:34 compute-0 systemd[1]: session-22.scope: Consumed 36.026s CPU time.
Nov 25 10:16:34 compute-0 systemd-logind[822]: Session 22 logged out. Waiting for processes to exit.
Nov 25 10:16:34 compute-0 systemd-logind[822]: Removed session 22.
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.958 106634 INFO neutron.common.config [-] Logging enabled!
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.958 106634 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.958 106634 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
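The long DEBUG dump that follows is produced by oslo.config's log_opt_values(), which walks every registered option and emits one line per value (secret options such as transport_url and metadata_proxy_shared_secret are masked as ****). A minimal sketch of that mechanism, registering just two of the options seen below:

import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.CONF
# Two options from the dump below; defaults match the logged values.
CONF.register_opts([
    cfg.IntOpt("agent_down_time", default=75),
    cfg.StrOpt("base_mac", default="fa:16:3e:00:00:00"),
])

CONF([], project="neutron")
# Emits the same "name = value log_opt_values ..." style lines as below.
CONF.log_opt_values(LOG, logging.DEBUG)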
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.959 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.959 106634 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.959 106634 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.959 106634 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.959 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.959 106634 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.960 106634 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.960 106634 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.960 106634 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.960 106634 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.960 106634 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.960 106634 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.960 106634 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.961 106634 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.962 106634 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.963 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.964 106634 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.965 106634 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.966 106634 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.966 106634 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.966 106634 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.966 106634 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.966 106634 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.966 106634 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.966 106634 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.966 106634 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.967 106634 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.968 106634 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.969 106634 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
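The DEFAULT options above include the metadata-proxy settings the agent uses when forwarding instance requests to nova-metadata (nova_metadata_host, nova_metadata_port = 8775, nova_metadata_protocol = https) together with the masked metadata_proxy_shared_secret. A minimal sketch of the signing scheme behind that secret: the instance UUID is HMAC-SHA256-signed and sent as X-Instance-ID-Signature so nova can verify the proxied request; the secret and UUID here are placeholders:

import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    # HMAC-SHA256 over the instance UUID, keyed with
    # metadata_proxy_shared_secret (masked as **** in the dump above).
    return hmac.new(shared_secret.encode(), instance_id.encode(),
                    hashlib.sha256).hexdigest()

instance_id = "11111111-2222-3333-4444-555555555555"  # placeholder
headers = {
    "X-Instance-ID": instance_id,
    "X-Instance-ID-Signature": sign_instance_id("not-the-real-secret",
                                                instance_id),
}
# nova-metadata recomputes the HMAC with its copy of the secret and
# rejects the request on mismatch.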
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.970 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.971 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.972 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.973 106634 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.974 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.975 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.976 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.976 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.976 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.976 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.976 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.976 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.976 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.976 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.977 106634 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
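The privsep.*.capabilities values above are numeric Linux capability constants. For reference, a short decode of the numbers appearing in this dump, using the names from linux/capability.h:

# Decode the capability numbers listed in the privsep sections above.
CAPS = {
    1: "CAP_DAC_OVERRIDE",
    2: "CAP_DAC_READ_SEARCH",
    12: "CAP_NET_ADMIN",
    19: "CAP_SYS_PTRACE",
    21: "CAP_SYS_ADMIN",
}

groups = {
    "privsep": [21, 12, 1, 2, 19],
    "privsep_dhcp_release": [21, 12],
    "privsep_ovs_vsctl": [21, 12],
    "privsep_namespace": [21],
    "privsep_conntrack": [12],
    "privsep_link": [12, 21],
}
for group, nums in groups.items():
    print(f"{group}: {[CAPS[n] for n in nums]}")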
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.978 106634 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.979 106634 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.980 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.981 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.982 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.983 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.984 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.984 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.984 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.984 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.984 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.984 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.984 106634 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.984 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.985 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.986 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.987 106634 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.988 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.989 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.990 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.990 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.990 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.990 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.990 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.990 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.990 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.991 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.992 106634 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:35.992 106634 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.001 106634 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.001 106634 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.001 106634 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.001 106634 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.001 106634 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.012 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 3fcb3423-a4d5-4f72-950c-307893e4a985 (UUID: 3fcb3423-a4d5-4f72-950c-307893e4a985) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.036 106634 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.037 106634 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.037 106634 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.037 106634 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.039 106634 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.045 106634 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.050 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '3fcb3423-a4d5-4f72-950c-307893e4a985'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], external_ids={}, name=3fcb3423-a4d5-4f72-950c-307893e4a985, nb_cfg_timestamp=1764065746535, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.051 106634 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7efe86320d90>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.052 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.052 106634 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.052 106634 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.052 106634 INFO oslo_service.service [-] Starting 1 workers
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.056 106634 DEBUG oslo_service.service [-] Started child 106741 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.059 106634 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpsq5_49k0/privsep.sock']
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.060 106741 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-441073'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.093 106741 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.094 106741 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.094 106741 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.101 106741 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.110 106741 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.118 106741 INFO eventlet.wsgi.server [-] (106741) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 25 10:16:36 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.750 106634 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.751 106634 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsq5_49k0/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.601 106746 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.605 106746 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.607 106746 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.608 106746 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106746
Nov 25 10:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:36.753 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[cbdad8db-f80e-4180-976b-c42a47b7d60f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:16:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:37.330 106746 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:16:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:37.330 106746 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:16:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:37.330 106746 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:16:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:37.917 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea9c66d-b791-455d-b7ee-d74516f5dab3]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:16:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:37.921 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, column=external_ids, values=({'neutron:ovn-metadata-id': 'a0c3ef8e-b599-5967-b10e-243ee7a8e5d8'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.747 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.778 106634 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.778 106634 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.779 106634 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.779 106634 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.779 106634 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.779 106634 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.780 106634 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.780 106634 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.781 106634 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.781 106634 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.781 106634 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.782 106634 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.782 106634 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.782 106634 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.783 106634 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.783 106634 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.783 106634 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.784 106634 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.784 106634 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.784 106634 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.784 106634 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.785 106634 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.785 106634 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.785 106634 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.786 106634 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.786 106634 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.787 106634 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.787 106634 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.787 106634 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.787 106634 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.788 106634 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.788 106634 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.788 106634 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.789 106634 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.789 106634 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.789 106634 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.790 106634 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.790 106634 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.790 106634 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.791 106634 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.791 106634 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.791 106634 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.792 106634 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.792 106634 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.793 106634 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.793 106634 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.793 106634 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.794 106634 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.794 106634 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.794 106634 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.794 106634 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.795 106634 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.795 106634 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.795 106634 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.795 106634 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.795 106634 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.796 106634 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.796 106634 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.796 106634 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.797 106634 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.797 106634 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.797 106634 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.797 106634 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.798 106634 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.798 106634 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.798 106634 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.798 106634 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.798 106634 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.799 106634 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.799 106634 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.799 106634 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.799 106634 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.800 106634 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.800 106634 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.800 106634 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.800 106634 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.801 106634 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.801 106634 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.801 106634 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.801 106634 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.802 106634 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.802 106634 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.802 106634 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.802 106634 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.803 106634 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.803 106634 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.803 106634 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.803 106634 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.804 106634 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.804 106634 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.804 106634 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.804 106634 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.805 106634 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.805 106634 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.805 106634 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.805 106634 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.806 106634 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.806 106634 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.806 106634 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.806 106634 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.806 106634 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.807 106634 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.807 106634 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.807 106634 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.807 106634 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.807 106634 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.808 106634 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.808 106634 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.808 106634 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.809 106634 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.809 106634 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.809 106634 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.810 106634 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.810 106634 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.810 106634 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.810 106634 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.810 106634 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.811 106634 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.811 106634 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.811 106634 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.811 106634 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.812 106634 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.812 106634 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.812 106634 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.813 106634 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.813 106634 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.813 106634 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.813 106634 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.814 106634 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.814 106634 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.814 106634 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.814 106634 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.815 106634 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.815 106634 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.815 106634 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.815 106634 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.816 106634 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.816 106634 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.816 106634 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.816 106634 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.817 106634 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.817 106634 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.817 106634 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.817 106634 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.817 106634 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.818 106634 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.818 106634 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.818 106634 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.818 106634 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.819 106634 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.819 106634 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.819 106634 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.819 106634 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.820 106634 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.820 106634 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.820 106634 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.820 106634 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.821 106634 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.821 106634 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.821 106634 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.821 106634 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.821 106634 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.822 106634 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.822 106634 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.822 106634 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.822 106634 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.822 106634 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.823 106634 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.823 106634 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.823 106634 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.823 106634 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.823 106634 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.824 106634 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.824 106634 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.824 106634 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.824 106634 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.825 106634 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.825 106634 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.825 106634 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.825 106634 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.825 106634 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.825 106634 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.825 106634 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.826 106634 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.826 106634 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.826 106634 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.826 106634 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.826 106634 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.826 106634 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.826 106634 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.827 106634 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.827 106634 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.827 106634 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.827 106634 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.827 106634 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.827 106634 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.827 106634 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.828 106634 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.828 106634 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.828 106634 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.828 106634 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.828 106634 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.828 106634 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.828 106634 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.829 106634 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.829 106634 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.829 106634 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.829 106634 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.829 106634 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.829 106634 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.829 106634 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.829 106634 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.830 106634 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.830 106634 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.830 106634 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.830 106634 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.830 106634 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.830 106634 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.830 106634 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.830 106634 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.831 106634 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.831 106634 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.831 106634 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.831 106634 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.831 106634 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.831 106634 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.831 106634 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.832 106634 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.832 106634 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.832 106634 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.832 106634 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.832 106634 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.832 106634 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.833 106634 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.833 106634 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.833 106634 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.833 106634 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.833 106634 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.833 106634 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.833 106634 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.834 106634 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.834 106634 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.834 106634 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.834 106634 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.834 106634 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.834 106634 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.835 106634 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.835 106634 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.835 106634 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.835 106634 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.835 106634 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.835 106634 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.835 106634 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.836 106634 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.836 106634 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.836 106634 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.836 106634 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.836 106634 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.836 106634 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.836 106634 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.837 106634 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.837 106634 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.837 106634 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.837 106634 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.837 106634 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.838 106634 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.838 106634 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.838 106634 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.838 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.838 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.838 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.838 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.839 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.839 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.839 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.839 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.839 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.839 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.839 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.840 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.840 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.840 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.840 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.840 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.840 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.841 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.841 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.841 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.841 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.841 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.841 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.842 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.842 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.842 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.842 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.842 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.842 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.842 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.843 106634 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.843 106634 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.843 106634 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.843 106634 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.843 106634 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:16:38 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:16:38.843 106634 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 10:16:39 compute-0 podman[106751]: 2025-11-25 10:16:39.030757941 +0000 UTC m=+0.131055749 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:16:41 compute-0 sshd-session[106777]: Accepted publickey for zuul from 192.168.122.30 port 36742 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:16:41 compute-0 systemd-logind[822]: New session 23 of user zuul.
Nov 25 10:16:41 compute-0 systemd[1]: Started Session 23 of User zuul.
Nov 25 10:16:41 compute-0 sshd-session[106777]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:16:42 compute-0 python3.9[106930]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:16:43 compute-0 sudo[107084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoqcjsuntwzjmsqwyhzuqkoaudgzbzyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065803.1315184-34-159968583684375/AnsiballZ_command.py'
Nov 25 10:16:43 compute-0 sudo[107084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:43 compute-0 python3.9[107086]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:16:43 compute-0 sudo[107084]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:45 compute-0 sudo[107249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wetfrtddykitbancorwoquhpnjbihiuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065804.308043-45-40374337126833/AnsiballZ_systemd_service.py'
Nov 25 10:16:45 compute-0 sudo[107249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:45 compute-0 python3.9[107251]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:16:45 compute-0 systemd[1]: Reloading.
Nov 25 10:16:45 compute-0 systemd-rc-local-generator[107276]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:16:45 compute-0 systemd-sysv-generator[107281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:16:45 compute-0 sudo[107249]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:46 compute-0 python3.9[107435]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:16:46 compute-0 network[107452]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:16:46 compute-0 network[107453]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:16:46 compute-0 network[107454]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 10:16:50 compute-0 sudo[107713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uadkrnbhjbuwmfxuyvxvwwuunavtqduk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065810.1672978-64-114422363654344/AnsiballZ_systemd_service.py'
Nov 25 10:16:50 compute-0 sudo[107713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:50 compute-0 python3.9[107715]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:50 compute-0 sudo[107713]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:51 compute-0 sudo[107866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvxhvnechlorpdvdlqcaxtmoscncyrlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065811.1001494-64-140664138509759/AnsiballZ_systemd_service.py'
Nov 25 10:16:51 compute-0 sudo[107866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:51 compute-0 python3.9[107868]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:51 compute-0 sudo[107866]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:52 compute-0 sudo[108019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klwoxjgmpsbiskdpvornrmqaotlgemoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065811.9631355-64-162775703579341/AnsiballZ_systemd_service.py'
Nov 25 10:16:52 compute-0 sudo[108019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:52 compute-0 python3.9[108021]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:52 compute-0 sudo[108019]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:52 compute-0 sudo[108172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-serqybjwbzsavskuookevloikomjwaoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065812.7230656-64-211736328658276/AnsiballZ_systemd_service.py'
Nov 25 10:16:52 compute-0 sudo[108172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:53 compute-0 python3.9[108174]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:53 compute-0 sudo[108172]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:53 compute-0 sudo[108325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvvaziardkffvarfcqbxysxtsmzhzaad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065813.4470038-64-75297684655489/AnsiballZ_systemd_service.py'
Nov 25 10:16:53 compute-0 sudo[108325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:54 compute-0 python3.9[108327]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:54 compute-0 sudo[108325]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:54 compute-0 sudo[108478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqucbzjrzuhtclllvfeuobrrfnhgnarf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065814.2670033-64-269864730439403/AnsiballZ_systemd_service.py'
Nov 25 10:16:54 compute-0 sudo[108478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:54 compute-0 python3.9[108480]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:54 compute-0 sudo[108478]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:55 compute-0 sudo[108631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhnbqweobdyznvadbwhisjmouuqzhinw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065815.0875673-64-245134961815026/AnsiballZ_systemd_service.py'
Nov 25 10:16:55 compute-0 sudo[108631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:55 compute-0 python3.9[108633]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:16:55 compute-0 sudo[108631]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:56 compute-0 sudo[108784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmafwkdlebkiecugpasdibwexgzkcjjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065815.993617-116-130985966694341/AnsiballZ_file.py'
Nov 25 10:16:56 compute-0 sudo[108784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:56 compute-0 python3.9[108786]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:56 compute-0 sudo[108784]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:57 compute-0 sudo[108936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogkktieifsdvoiwaljyeagmhndktxbir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065816.8974524-116-166539506947160/AnsiballZ_file.py'
Nov 25 10:16:57 compute-0 sudo[108936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:57 compute-0 python3.9[108938]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:57 compute-0 sudo[108936]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:58 compute-0 sudo[109088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmurmcjhgroufvmbzsjnjtennfqvhneh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065817.5972438-116-187688204954644/AnsiballZ_file.py'
Nov 25 10:16:58 compute-0 sudo[109088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:58 compute-0 python3.9[109090]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:58 compute-0 sudo[109088]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:58 compute-0 sudo[109240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpwhmzulunhnoytpfkbshmorglufpffz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065818.3489976-116-139403146546062/AnsiballZ_file.py'
Nov 25 10:16:58 compute-0 sudo[109240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:58 compute-0 python3.9[109242]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:58 compute-0 sudo[109240]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:59 compute-0 sudo[109392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgqgifkburjtidorutcverndvcxbxxfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065819.0220838-116-35224896046940/AnsiballZ_file.py'
Nov 25 10:16:59 compute-0 sudo[109392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:16:59 compute-0 python3.9[109394]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:16:59 compute-0 sudo[109392]: pam_unix(sudo:session): session closed for user root
Nov 25 10:16:59 compute-0 sudo[109544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwwaxzaellnytkaexynosvcxoerlltpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065819.650296-116-260462378394053/AnsiballZ_file.py'
Nov 25 10:16:59 compute-0 sudo[109544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:00 compute-0 python3.9[109546]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:00 compute-0 sudo[109544]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:00 compute-0 sudo[109696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnssbifqaijskewofdvjurmnsmdccnrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065820.361528-116-219163022336491/AnsiballZ_file.py'
Nov 25 10:17:00 compute-0 sudo[109696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:00 compute-0 python3.9[109698]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:00 compute-0 sudo[109696]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:01 compute-0 sudo[109848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbsupqjqeqmihokicnhaeohxqyckwjrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065821.0979965-166-203137539656014/AnsiballZ_file.py'
Nov 25 10:17:01 compute-0 sudo[109848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:01 compute-0 python3.9[109850]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:01 compute-0 sudo[109848]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:02 compute-0 sudo[110000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehdzqnvxinntuamndstlcjjgeftwitvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065821.7608144-166-134625282151211/AnsiballZ_file.py'
Nov 25 10:17:02 compute-0 sudo[110000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:02 compute-0 python3.9[110002]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:02 compute-0 sudo[110000]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:02 compute-0 sudo[110152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iazdqmmtznexlrmvuxxppwwvuibyegjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065822.443839-166-85226184678020/AnsiballZ_file.py'
Nov 25 10:17:02 compute-0 sudo[110152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:02 compute-0 python3.9[110154]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:02 compute-0 sudo[110152]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:03 compute-0 sudo[110304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrqeuvkjiskoxmdqezfnamcnfxccwjro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065823.1247709-166-117048149020284/AnsiballZ_file.py'
Nov 25 10:17:03 compute-0 sudo[110304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:03 compute-0 python3.9[110306]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:03 compute-0 sudo[110304]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:04 compute-0 sudo[110456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcnsweaetdaiqnjsjwptqjiesppborxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065823.9234257-166-257003052447218/AnsiballZ_file.py'
Nov 25 10:17:04 compute-0 sudo[110456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:04 compute-0 podman[110458]: 2025-11-25 10:17:04.279003372 +0000 UTC m=+0.068982232 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 10:17:04 compute-0 python3.9[110459]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:04 compute-0 sudo[110456]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:04 compute-0 sudo[110629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xssoomuuvwcawjrcrnzqotwlkgvwudtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065824.5317576-166-209775997853281/AnsiballZ_file.py'
Nov 25 10:17:04 compute-0 sudo[110629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:04 compute-0 python3.9[110631]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:04 compute-0 sudo[110629]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:05 compute-0 sudo[110781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqieauguqrbkuecszjaklakvzwlmexrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065825.2797174-166-90823593738155/AnsiballZ_file.py'
Nov 25 10:17:05 compute-0 sudo[110781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:05 compute-0 python3.9[110783]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:17:05 compute-0 sudo[110781]: pam_unix(sudo:session): session closed for user root
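The file tasks above delete the legacy tripleo_nova_* units twice: once from the vendor directory /usr/lib/systemd/system and once from the admin directory /etc/systemd/system, since a unit file (or an override) can live in either place. Roughly equivalent shell, assuming the same seven unit names seen in the log:

    for unit in tripleo_nova_libvirt.target \
                tripleo_nova_virtlogd_wrapper.service \
                tripleo_nova_virtnodedevd.service \
                tripleo_nova_virtproxyd.service \
                tripleo_nova_virtqemud.service \
                tripleo_nova_virtsecretd.service \
                tripleo_nova_virtstoraged.service; do
        rm -f "/usr/lib/systemd/system/$unit" "/etc/systemd/system/$unit"   # state=absent
    done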
Nov 25 10:17:06 compute-0 sudo[110933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwrdekqmtkoaibrclatsqmwobtmcluhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065826.0008252-217-46928762714354/AnsiballZ_command.py'
Nov 25 10:17:06 compute-0 sudo[110933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:06 compute-0 python3.9[110935]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:17:06 compute-0 sudo[110933]: pam_unix(sudo:session): session closed for user root
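The _raw_params script above only acts when certmonger is running: systemctl is-active exits 0 for an active unit, the service is then disabled and stopped in one step, and it is masked only when no unit file exists under /etc/systemd/system, presumably because masking places a /dev/null symlink at that exact path. A cleaned-up copy of the logged snippet:

    if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service            # stop + disable together
        test -f /etc/systemd/system/certmonger.service \
            || systemctl mask certmonger.service              # mask only if nothing occupies that path
    fi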
Nov 25 10:17:07 compute-0 python3.9[111087]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:17:07 compute-0 sudo[111237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpuqpblnmpzqimnssrynzistlwmaukmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065827.5611172-235-22205578063253/AnsiballZ_systemd_service.py'
Nov 25 10:17:07 compute-0 sudo[111237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:08 compute-0 python3.9[111239]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:17:08 compute-0 systemd[1]: Reloading.
Nov 25 10:17:08 compute-0 systemd-rc-local-generator[111265]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:17:08 compute-0 systemd-sysv-generator[111270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:17:08 compute-0 sudo[111237]: pam_unix(sudo:session): session closed for user root
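With the units removed, the play issues a bare daemon_reload=True, which maps to the command below; the rc-local and sysv-generator warnings that follow it are routine generator output emitted on every reload, not errors introduced by this run.

    systemctl daemon-reload   # re-read unit files after the deletions above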
Nov 25 10:17:09 compute-0 sudo[111423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwiojiydioyygrfzqtxaxwcsyeukddcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065828.7717743-243-50783025665300/AnsiballZ_command.py'
Nov 25 10:17:09 compute-0 sudo[111423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:09 compute-0 podman[111425]: 2025-11-25 10:17:09.210516292 +0000 UTC m=+0.112024180 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:17:09 compute-0 python3.9[111426]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:17:09 compute-0 sudo[111423]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:09 compute-0 sudo[111602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssvbyeilxsxneeselbogoixhgwylwido ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065829.460636-243-165637719194213/AnsiballZ_command.py'
Nov 25 10:17:09 compute-0 sudo[111602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:09 compute-0 python3.9[111604]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:17:09 compute-0 sudo[111602]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:10 compute-0 sudo[111755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfevkdhzcxlenwvmziexgmrmhkxsqrnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065830.1219895-243-171640656537462/AnsiballZ_command.py'
Nov 25 10:17:10 compute-0 sudo[111755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:10 compute-0 python3.9[111757]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:17:10 compute-0 sudo[111755]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:11 compute-0 sudo[111908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egojoirngwkkqpdkpyhulmkcvhqxclgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065830.7627642-243-25709429426359/AnsiballZ_command.py'
Nov 25 10:17:11 compute-0 sudo[111908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:11 compute-0 python3.9[111910]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:17:11 compute-0 sudo[111908]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:11 compute-0 sudo[112061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pejjdepsoezlwxtwxsbmjptekfoxezck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065831.473064-243-203841404197684/AnsiballZ_command.py'
Nov 25 10:17:11 compute-0 sudo[112061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:11 compute-0 python3.9[112063]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:17:11 compute-0 sudo[112061]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:12 compute-0 sudo[112214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gywqadmbtcansexertxqgtgkyzswdhii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065832.138838-243-58541996082474/AnsiballZ_command.py'
Nov 25 10:17:12 compute-0 sudo[112214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:12 compute-0 python3.9[112216]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:17:12 compute-0 sudo[112214]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:13 compute-0 sudo[112367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjbvvswfvfgzjmaybnqnpvnovdjxcljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065832.7908547-243-104059693905875/AnsiballZ_command.py'
Nov 25 10:17:13 compute-0 sudo[112367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:13 compute-0 python3.9[112369]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:17:13 compute-0 sudo[112367]: pam_unix(sudo:session): session closed for user root
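Deleting a unit file does not clear systemd's memory of a failed unit, so the run of reset-failed commands above drops each removed tripleo_nova_* unit from `systemctl --failed`. A compact sketch of the same sequence:

    for unit in tripleo_nova_libvirt.target \
                tripleo_nova_virtlogd_wrapper.service \
                tripleo_nova_virtnodedevd.service \
                tripleo_nova_virtproxyd.service \
                tripleo_nova_virtqemud.service \
                tripleo_nova_virtsecretd.service \
                tripleo_nova_virtstoraged.service; do
        systemctl reset-failed "$unit" || true   # ignore units systemd no longer knows
    done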
Nov 25 10:17:14 compute-0 sudo[112520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebvmccyrvuvzibiejcvrvqjebhmmojsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065833.8108482-297-13267508042179/AnsiballZ_getent.py'
Nov 25 10:17:14 compute-0 sudo[112520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:14 compute-0 python3.9[112522]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 25 10:17:14 compute-0 sudo[112520]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:15 compute-0 sudo[112673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfrbnccmxlgaowqwecihekfwdyjllfey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065834.644656-305-75205937194305/AnsiballZ_group.py'
Nov 25 10:17:15 compute-0 sudo[112673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:15 compute-0 python3.9[112675]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:17:15 compute-0 groupadd[112676]: group added to /etc/group: name=libvirt, GID=42473
Nov 25 10:17:15 compute-0 groupadd[112676]: group added to /etc/gshadow: name=libvirt
Nov 25 10:17:15 compute-0 groupadd[112676]: new group: name=libvirt, GID=42473
Nov 25 10:17:15 compute-0 sudo[112673]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:16 compute-0 sudo[112831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqivmotsjrnumfffgpvkmoyoofspdkcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065835.573272-313-150267921618027/AnsiballZ_user.py'
Nov 25 10:17:16 compute-0 sudo[112831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:16 compute-0 python3.9[112833]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 10:17:16 compute-0 useradd[112835]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 10:17:16 compute-0 sudo[112831]: pam_unix(sudo:session): session closed for user root
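The getent/group/user trio above pins the libvirt account to GID/UID 42473 (a deployment-chosen ID, judging by the log) so file ownership stays consistent across hosts. An idempotent shell equivalent:

    getent group libvirt  || groupadd -g 42473 libvirt
    getent passwd libvirt || useradd -u 42473 -g libvirt \
        -c 'libvirt user' -s /sbin/nologin libvirt   # no login shell, per the log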
Nov 25 10:17:17 compute-0 sudo[112991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dshlcuwqhfsjuyicniuvumrhpyxafreb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065836.7193449-324-240910133456070/AnsiballZ_setup.py'
Nov 25 10:17:17 compute-0 sudo[112991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:17 compute-0 python3.9[112993]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:17:17 compute-0 sudo[112991]: pam_unix(sudo:session): session closed for user root
Nov 25 10:17:18 compute-0 sudo[113075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcdjcpzlttvexgaaeizcvzahoymbxpeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065836.7193449-324-240910133456070/AnsiballZ_dnf.py'
Nov 25 10:17:18 compute-0 sudo[113075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:17:18 compute-0 python3.9[113077]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
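The dnf task installs the whole virtualization stack in one transaction; note that the first four names carry trailing spaces in the logged list ('libvirt ', 'libvirt-admin ', ...), apparently stray whitespace from the playbook variable. Trimmed, the equivalent command line would be:

    dnf -y install libvirt libvirt-admin libvirt-client libvirt-daemon \
        qemu-kvm qemu-img libguestfs libseccomp swtpm swtpm-tools \
        edk2-ovmf ceph-common cyrus-sasl-scram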
Nov 25 10:17:34 compute-0 podman[113265]: 2025-11-25 10:17:34.959733406 +0000 UTC m=+0.062847178 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 10:17:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:17:36.003 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:17:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:17:36.004 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:17:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:17:36.004 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:17:39 compute-0 podman[113289]: 2025-11-25 10:17:39.987798034 +0000 UTC m=+0.095844298 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 10:17:47 compute-0 kernel: SELinux:  Converting 2757 SID table entries...
Nov 25 10:17:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:17:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:17:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:17:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:17:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:17:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:17:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:17:57 compute-0 kernel: SELinux:  Converting 2757 SID table entries...
Nov 25 10:17:57 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:17:57 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:17:57 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:17:57 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:17:57 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:17:57 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:17:57 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
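The two kernel blocks above ("Converting 2757 SID table entries..." plus the capability lines) are SELinux policy reloads, consistent with RPM scriptlets in the running dnf transaction installing policy modules for the new packages. One way to check what landed (an assumed verification step, not taken from the log):

    semodule -l | grep -E 'virt|ceph'   # list installed SELinux policy modules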
Nov 25 10:18:05 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 25 10:18:05 compute-0 podman[113332]: 2025-11-25 10:18:05.953043651 +0000 UTC m=+0.062857669 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:18:10 compute-0 podman[113352]: 2025-11-25 10:18:10.982647554 +0000 UTC m=+0.097514296 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 10:18:17 compute-0 sshd-session[116284]: Connection closed by authenticating user root 171.244.51.45 port 32824 [preauth]
Nov 25 10:18:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:18:36.004 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:18:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:18:36.005 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:18:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:18:36.005 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:18:36 compute-0 podman[126766]: 2025-11-25 10:18:36.94877226 +0000 UTC m=+0.061119804 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 10:18:41 compute-0 podman[129432]: 2025-11-25 10:18:41.968033992 +0000 UTC m=+0.087419937 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 10:18:56 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Nov 25 10:18:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 25 10:18:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 25 10:18:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 25 10:18:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 25 10:18:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 25 10:18:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 25 10:18:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 25 10:18:57 compute-0 groupadd[130236]: group added to /etc/group: name=dnsmasq, GID=992
Nov 25 10:18:57 compute-0 groupadd[130236]: group added to /etc/gshadow: name=dnsmasq
Nov 25 10:18:57 compute-0 groupadd[130236]: new group: name=dnsmasq, GID=992
Nov 25 10:18:57 compute-0 useradd[130243]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 25 10:18:58 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 10:18:58 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 25 10:18:58 compute-0 dbus-broker-launch[812]: Noticed file-system modification, trigger reload.
Nov 25 10:18:59 compute-0 groupadd[130256]: group added to /etc/group: name=clevis, GID=991
Nov 25 10:18:59 compute-0 groupadd[130256]: group added to /etc/gshadow: name=clevis
Nov 25 10:18:59 compute-0 groupadd[130256]: new group: name=clevis, GID=991
Nov 25 10:18:59 compute-0 useradd[130263]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 25 10:18:59 compute-0 usermod[130273]: add 'clevis' to group 'tss'
Nov 25 10:18:59 compute-0 usermod[130273]: add 'clevis' to shadow group 'tss'
Nov 25 10:19:04 compute-0 polkitd[43613]: Reloading rules
Nov 25 10:19:04 compute-0 polkitd[43613]: Collecting garbage unconditionally...
Nov 25 10:19:04 compute-0 polkitd[43613]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 10:19:04 compute-0 polkitd[43613]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 10:19:04 compute-0 polkitd[43613]: Finished loading, compiling and executing 3 rules
Nov 25 10:19:04 compute-0 polkitd[43613]: Reloading rules
Nov 25 10:19:04 compute-0 polkitd[43613]: Collecting garbage unconditionally...
Nov 25 10:19:04 compute-0 polkitd[43613]: Loading rules from directory /etc/polkit-1/rules.d
Nov 25 10:19:04 compute-0 polkitd[43613]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 25 10:19:04 compute-0 polkitd[43613]: Finished loading, compiling and executing 3 rules
Nov 25 10:19:05 compute-0 groupadd[130460]: group added to /etc/group: name=ceph, GID=167
Nov 25 10:19:05 compute-0 groupadd[130460]: group added to /etc/gshadow: name=ceph
Nov 25 10:19:05 compute-0 groupadd[130460]: new group: name=ceph, GID=167
Nov 25 10:19:05 compute-0 useradd[130466]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
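The dnsmasq, clevis, and ceph accounts above are created by package scriptlets inside the same transaction; ceph's fixed UID/GID 167 is the conventional reserved ID for that package, and clevis is added to the tss group for TPM access. A quick post-install check (assumed, not from the log):

    getent passwd dnsmasq clevis ceph   # confirm the scriptlet-created system accounts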
Nov 25 10:19:07 compute-0 podman[130475]: 2025-11-25 10:19:07.757084736 +0000 UTC m=+0.060072744 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 10:19:08 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 25 10:19:08 compute-0 sshd[1011]: Received signal 15; terminating.
Nov 25 10:19:08 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 25 10:19:08 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 25 10:19:08 compute-0 systemd[1]: sshd.service: Consumed 1.863s CPU time, read 32.0K from disk, written 0B to disk.
Nov 25 10:19:08 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 25 10:19:08 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 25 10:19:08 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 10:19:08 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 10:19:08 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 25 10:19:08 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 25 10:19:08 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 25 10:19:08 compute-0 sshd[131002]: Server listening on 0.0.0.0 port 22.
Nov 25 10:19:08 compute-0 sshd[131002]: Server listening on :: port 22.
Nov 25 10:19:08 compute-0 systemd[1]: Started OpenSSH server daemon.
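The sshd stop/start pair above is the openssh-server package restarting cleanly mid-transaction; the three key-generation units are skipped because their ConditionPathExists check defers host-key creation to cloud-init. The daemon comes back listening on both address families, which could be confirmed with:

    systemctl status sshd --no-pager   # assumed check; recent log lines should show 'Server listening on ... port 22'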
Nov 25 10:19:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:19:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:19:10 compute-0 systemd[1]: Reloading.
Nov 25 10:19:10 compute-0 systemd-sysv-generator[131262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:10 compute-0 systemd-rc-local-generator[131253]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:19:12 compute-0 podman[133463]: 2025-11-25 10:19:12.99466852 +0000 UTC m=+0.102844394 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 10:19:13 compute-0 sudo[113075]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:14 compute-0 sudo[135324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frkhnvtahsupgxymruuxqlklgqzwvrkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065953.6460068-336-276961975315238/AnsiballZ_systemd.py'
Nov 25 10:19:14 compute-0 sudo[135324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:14 compute-0 python3.9[135345]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:19:14 compute-0 systemd[1]: Reloading.
Nov 25 10:19:14 compute-0 systemd-rc-local-generator[135827]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:14 compute-0 systemd-sysv-generator[135831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:14 compute-0 sudo[135324]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:15 compute-0 sudo[136656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cygbvjkluhaznifneolxkbaxpqwscwai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065955.0417485-336-193270733404521/AnsiballZ_systemd.py'
Nov 25 10:19:15 compute-0 sudo[136656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:15 compute-0 python3.9[136682]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:19:15 compute-0 systemd[1]: Reloading.
Nov 25 10:19:15 compute-0 systemd-rc-local-generator[137238]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:15 compute-0 systemd-sysv-generator[137242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:15 compute-0 sudo[136656]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:16 compute-0 sudo[137991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhokkkyzrjicgjgbguvqomfwcnxouled ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065956.060394-336-195346754160308/AnsiballZ_systemd.py'
Nov 25 10:19:16 compute-0 sudo[137991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:16 compute-0 python3.9[138010]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:19:16 compute-0 systemd[1]: Reloading.
Nov 25 10:19:16 compute-0 systemd-rc-local-generator[138550]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:16 compute-0 systemd-sysv-generator[138553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:16 compute-0 sudo[137991]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:17 compute-0 sudo[139409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxankicjsieaucnfocylrkglrljtgzvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065957.0682738-336-52208448358333/AnsiballZ_systemd.py'
Nov 25 10:19:17 compute-0 sudo[139409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:17 compute-0 python3.9[139427]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:19:17 compute-0 systemd[1]: Reloading.
Nov 25 10:19:17 compute-0 systemd-rc-local-generator[139937]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:17 compute-0 systemd-sysv-generator[139941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:18 compute-0 sudo[139409]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:18 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:19:18 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:19:18 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.779s CPU time.
Nov 25 10:19:18 compute-0 systemd[1]: run-r2720cf652a704d7a881651ba993c0445.service: Deactivated successfully.
Nov 25 10:19:18 compute-0 sudo[140576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekeucvkhdchqgwpmhciqcshxvixutjxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065958.1943738-365-247653997535850/AnsiballZ_systemd.py'
Nov 25 10:19:18 compute-0 sudo[140576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:18 compute-0 python3.9[140579]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:18 compute-0 systemd[1]: Reloading.
Nov 25 10:19:18 compute-0 systemd-sysv-generator[140614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:18 compute-0 systemd-rc-local-generator[140611]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:19 compute-0 sudo[140576]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:19 compute-0 sudo[140768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nljqsdhtwpujfkhslvuzfmoeovzdzpel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065959.2793028-365-146276193053007/AnsiballZ_systemd.py'
Nov 25 10:19:19 compute-0 sudo[140768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:19 compute-0 python3.9[140770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:19 compute-0 systemd[1]: Reloading.
Nov 25 10:19:20 compute-0 systemd-sysv-generator[140801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:20 compute-0 systemd-rc-local-generator[140797]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:20 compute-0 sudo[140768]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:20 compute-0 sudo[140958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwwcjbxfuoisnwvbodmmogajbvgoplnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065960.3949296-365-139405236474273/AnsiballZ_systemd.py'
Nov 25 10:19:20 compute-0 sudo[140958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:21 compute-0 python3.9[140960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:21 compute-0 systemd[1]: Reloading.
Nov 25 10:19:21 compute-0 systemd-rc-local-generator[140989]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:21 compute-0 systemd-sysv-generator[140993]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:21 compute-0 sudo[140958]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:21 compute-0 sudo[141147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdadqlrcmhiskpzjicehcarpluisfquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065961.5358737-365-29851348478515/AnsiballZ_systemd.py'
Nov 25 10:19:21 compute-0 sudo[141147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:22 compute-0 python3.9[141149]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:22 compute-0 sudo[141147]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:22 compute-0 sudo[141302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndfajuzlbbxhpcptoyriqiwgluyowmht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065962.4261878-365-142833464803461/AnsiballZ_systemd.py'
Nov 25 10:19:22 compute-0 sudo[141302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:22 compute-0 python3.9[141304]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:23 compute-0 systemd[1]: Reloading.
Nov 25 10:19:23 compute-0 systemd-rc-local-generator[141331]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:23 compute-0 systemd-sysv-generator[141334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:23 compute-0 sudo[141302]: pam_unix(sudo:session): session closed for user root
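
Between 10:19:18 and 10:19:23 the play enables and unmasks the modular libvirt daemons. Note state=None in each logged call: the units are enabled but not started here. The log shows one invocation per service; a consolidated loop sketch under that assumption:

    - name: Enable modular libvirt daemons without starting them
      become: true
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: true
        masked: false
      loop:
        - virtlogd.service
        - virtnodedevd.service
        - virtproxyd.service
        - virtqemud.service
        - virtsecretd.service
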
Nov 25 10:19:23 compute-0 sudo[141492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bskpatrsvaqpvfycdatmizucvzpirzfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065963.5005293-401-238034413167111/AnsiballZ_systemd.py'
Nov 25 10:19:23 compute-0 sudo[141492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:24 compute-0 python3.9[141494]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 25 10:19:24 compute-0 systemd[1]: Reloading.
Nov 25 10:19:24 compute-0 systemd-rc-local-generator[141520]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:19:24 compute-0 systemd-sysv-generator[141526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:19:24 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 25 10:19:24 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 25 10:19:24 compute-0 sudo[141492]: pam_unix(sudo:session): session closed for user root
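
virtproxyd-tls.socket is the one unit that is also started (state=started), and systemd confirms it with the two "Listening on" messages above. A sketch of the corresponding task, assuming the same become pattern:

    - name: Enable and start the libvirt proxy TLS socket
      become: true
      ansible.builtin.systemd:
        name: virtproxyd-tls.socket
        enabled: true
        masked: false
        state: started
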
Nov 25 10:19:25 compute-0 sudo[141685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plmqqriynropvuisithvwkjrgdulqpzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065964.747387-409-85765924475/AnsiballZ_systemd.py'
Nov 25 10:19:25 compute-0 sudo[141685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:25 compute-0 python3.9[141687]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:25 compute-0 sudo[141685]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:25 compute-0 sudo[141840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtrvzswcffltqjgcjilwvkfikqylfvxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065965.5335386-409-167927091887397/AnsiballZ_systemd.py'
Nov 25 10:19:25 compute-0 sudo[141840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:26 compute-0 python3.9[141842]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:26 compute-0 sudo[141840]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:26 compute-0 sudo[141995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voiwyxdtzvrunlptjvhflcxykzcixyor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065966.3107133-409-223961102908402/AnsiballZ_systemd.py'
Nov 25 10:19:26 compute-0 sudo[141995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:26 compute-0 python3.9[141997]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:26 compute-0 sudo[141995]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:27 compute-0 sudo[142150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzoqbatwqnwwckcgevyxowazjyacussa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065967.1215234-409-183134820700338/AnsiballZ_systemd.py'
Nov 25 10:19:27 compute-0 sudo[142150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:27 compute-0 python3.9[142152]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:27 compute-0 sudo[142150]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:28 compute-0 sudo[142305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjbpuhvjrmqrbgeqepihvbuavudrgxwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065967.9128733-409-62684851880532/AnsiballZ_systemd.py'
Nov 25 10:19:28 compute-0 sudo[142305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:28 compute-0 python3.9[142307]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:28 compute-0 sudo[142305]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:28 compute-0 sudo[142460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzgziizevalyznjlqzltjzretqpjaanb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065968.6914213-409-211802391484700/AnsiballZ_systemd.py'
Nov 25 10:19:28 compute-0 sudo[142460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:29 compute-0 python3.9[142462]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:29 compute-0 sudo[142460]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:29 compute-0 sudo[142615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-houmpkgfmyccsapilohdjdmjhcfwyirw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065969.5021408-409-163884588151498/AnsiballZ_systemd.py'
Nov 25 10:19:29 compute-0 sudo[142615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:30 compute-0 python3.9[142617]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:30 compute-0 sudo[142615]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:30 compute-0 sudo[142770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upbrfqmrnutpzdjatrpdsqjwnaabfays ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065970.2565851-409-143430689310984/AnsiballZ_systemd.py'
Nov 25 10:19:30 compute-0 sudo[142770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:30 compute-0 python3.9[142772]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:30 compute-0 sudo[142770]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:31 compute-0 sudo[142925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icgadbltqwicpogolktinjjdgmlxugnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065971.0829995-409-179551846029528/AnsiballZ_systemd.py'
Nov 25 10:19:31 compute-0 sudo[142925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:31 compute-0 python3.9[142927]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:31 compute-0 sudo[142925]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:32 compute-0 sudo[143080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlhauyimgegeamsonimxqprqujrresmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065971.8443615-409-177427270453550/AnsiballZ_systemd.py'
Nov 25 10:19:32 compute-0 sudo[143080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:32 compute-0 python3.9[143082]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:32 compute-0 sudo[143080]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:32 compute-0 sudo[143235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxvtwwgaohwxdoiheubygysdvevrmuuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065972.5854454-409-52888827898794/AnsiballZ_systemd.py'
Nov 25 10:19:32 compute-0 sudo[143235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:33 compute-0 python3.9[143237]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:33 compute-0 sudo[143235]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:33 compute-0 sudo[143390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqrlwmihdgsgvezydnpwgbtigtydlghl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065973.4429176-409-253603952649013/AnsiballZ_systemd.py'
Nov 25 10:19:33 compute-0 sudo[143390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:34 compute-0 python3.9[143392]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:34 compute-0 sudo[143390]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:34 compute-0 sudo[143545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtuhicyktpfylrbdewsgijxpunqeuvum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065974.3682435-409-117193145027576/AnsiballZ_systemd.py'
Nov 25 10:19:34 compute-0 sudo[143545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:34 compute-0 python3.9[143547]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:35 compute-0 sudo[143545]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:35 compute-0 sudo[143700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffgvrvhllfqgfnipqipybglqteobvqlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065975.1990206-409-65334469446581/AnsiballZ_systemd.py'
Nov 25 10:19:35 compute-0 sudo[143700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:35 compute-0 python3.9[143702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 25 10:19:35 compute-0 sudo[143700]: pam_unix(sudo:session): session closed for user root
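
The preceding fourteen invocations (10:19:25 to 10:19:35) enable the remaining per-daemon sockets in their main, -ro, and -admin variants, again without starting them. An equivalent loop sketch; the unit list is taken directly from the logged calls:

    - name: Enable libvirt daemon sockets
      become: true
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: true
        masked: false
      loop:
        - virtlogd.socket
        - virtlogd-admin.socket
        - virtnodedevd.socket
        - virtnodedevd-ro.socket
        - virtnodedevd-admin.socket
        - virtproxyd.socket
        - virtproxyd-ro.socket
        - virtproxyd-admin.socket
        - virtqemud.socket
        - virtqemud-ro.socket
        - virtqemud-admin.socket
        - virtsecretd.socket
        - virtsecretd-ro.socket
        - virtsecretd-admin.socket
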
Nov 25 10:19:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:19:36.006 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:19:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:19:36.008 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:19:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:19:36.008 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:19:36 compute-0 sudo[143855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyqzmenfskzkwkpwyqrkduwhvqxqegmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065976.293615-511-17225637071864/AnsiballZ_file.py'
Nov 25 10:19:36 compute-0 sudo[143855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:36 compute-0 python3.9[143857]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:19:36 compute-0 sudo[143855]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:37 compute-0 sudo[144007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlenqawelcwsaeblunvwbsgyycfimbhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065977.0790713-511-145920684540972/AnsiballZ_file.py'
Nov 25 10:19:37 compute-0 sudo[144007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:37 compute-0 python3.9[144009]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:19:37 compute-0 sudo[144007]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:37 compute-0 podman[144086]: 2025-11-25 10:19:37.990214303 +0000 UTC m=+0.083033454 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 10:19:38 compute-0 sudo[144176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhgcolcwgatteogojutljwulhbvdtpnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065977.7579925-511-34804505844884/AnsiballZ_file.py'
Nov 25 10:19:38 compute-0 sudo[144176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:38 compute-0 python3.9[144178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:19:38 compute-0 sudo[144176]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:38 compute-0 sudo[144328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evidrzkplesxlrfdxfhpbjtgujcbbktb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065978.3956378-511-159346472679954/AnsiballZ_file.py'
Nov 25 10:19:38 compute-0 sudo[144328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:38 compute-0 python3.9[144330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:19:38 compute-0 sudo[144328]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:39 compute-0 sudo[144480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrfbmympcrrxeylkajfpfpqqdkpbnfji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065979.0021698-511-240925700800968/AnsiballZ_file.py'
Nov 25 10:19:39 compute-0 sudo[144480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:39 compute-0 python3.9[144482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:19:39 compute-0 sudo[144480]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:39 compute-0 sudo[144632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbpdxdvugobnbakdsnxdsmsqlxmvzgjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065979.5994465-511-273139989900797/AnsiballZ_file.py'
Nov 25 10:19:39 compute-0 sudo[144632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:40 compute-0 python3.9[144634]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:19:40 compute-0 sudo[144632]: pam_unix(sudo:session): session closed for user root
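
The ansible.builtin.file invocations above create configuration and PKI directories with the SELinux type container_file_t so containerized services can read them. A sketch of the pattern for the three 0755 root-owned PKI directories; per-item attributes vary in the log (e.g. /etc/pki/qemu is group qemu, and the first two directories log no explicit mode):

    - name: Create libvirt PKI directories with container-accessible SELinux type
      become: true
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t
      loop:
        - /etc/pki/libvirt
        - /etc/pki/libvirt/private
        - /etc/pki/CA
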
Nov 25 10:19:40 compute-0 sudo[144784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-socqplwaxysuxdgyulisgxlyeqjbpnvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065980.204476-554-167685517665733/AnsiballZ_stat.py'
Nov 25 10:19:40 compute-0 sudo[144784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:40 compute-0 python3.9[144786]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:19:40 compute-0 sudo[144784]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:41 compute-0 sudo[144909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcbsxjiqpcjslazmydhrhpncmjpbmmfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065980.204476-554-167685517665733/AnsiballZ_copy.py'
Nov 25 10:19:41 compute-0 sudo[144909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:41 compute-0 python3.9[144911]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764065980.204476-554-167685517665733/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:41 compute-0 sudo[144909]: pam_unix(sudo:session): session closed for user root
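
Each configuration file lands as a stat/copy pair: ansible.legacy.stat fetches the SHA-1 checksum of the existing file, then ansible.legacy.copy ships the new content when it differs. That is the normal remote half of a copy task; a sketch of the originating task for virtlogd.conf, with ownership and mode from the logged call (the source filename on the controller is an assumption):

    - name: Install virtlogd.conf
      become: true
      ansible.builtin.copy:
        src: virtlogd.conf
        dest: /etc/libvirt/virtlogd.conf
        owner: libvirt
        group: libvirt
        mode: "0640"

The same pair repeats below for virtnodedevd.conf, virtproxyd.conf, virtqemud.conf, qemu.conf, virtsecretd.conf, auth.conf, and the SASL libvirt.conf.
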
Nov 25 10:19:41 compute-0 sudo[145061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzjggmxavivdrqmqbltkpsdmhcsqtpjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065981.7356203-554-235136921230790/AnsiballZ_stat.py'
Nov 25 10:19:41 compute-0 sudo[145061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:42 compute-0 python3.9[145063]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:19:42 compute-0 sudo[145061]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:42 compute-0 sudo[145186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugoowcvmigtwkjfjyyllhqkhtbqxowjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065981.7356203-554-235136921230790/AnsiballZ_copy.py'
Nov 25 10:19:42 compute-0 sudo[145186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:42 compute-0 python3.9[145188]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764065981.7356203-554-235136921230790/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:42 compute-0 sudo[145186]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:43 compute-0 sudo[145350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxrjacmqxhtsahtpryvqivzmnngkybuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065982.8116066-554-248671654905017/AnsiballZ_stat.py'
Nov 25 10:19:43 compute-0 sudo[145350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:43 compute-0 podman[145312]: 2025-11-25 10:19:43.107200757 +0000 UTC m=+0.071928300 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 25 10:19:43 compute-0 python3.9[145358]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:19:43 compute-0 sudo[145350]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:43 compute-0 sudo[145488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elnbiuzjpyhiexyknfwnkklpsklrbmjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065982.8116066-554-248671654905017/AnsiballZ_copy.py'
Nov 25 10:19:43 compute-0 sudo[145488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:43 compute-0 python3.9[145490]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764065982.8116066-554-248671654905017/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:43 compute-0 sudo[145488]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:44 compute-0 sudo[145640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjjxcrmyzoexopdxdhkhkxiroknmwsxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065983.9932594-554-164780672083151/AnsiballZ_stat.py'
Nov 25 10:19:44 compute-0 sudo[145640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:44 compute-0 python3.9[145642]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:19:44 compute-0 sudo[145640]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:44 compute-0 sudo[145765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bagqmbkmbyjosyghdwtxyerrekmxpbgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065983.9932594-554-164780672083151/AnsiballZ_copy.py'
Nov 25 10:19:44 compute-0 sudo[145765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:45 compute-0 python3.9[145767]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764065983.9932594-554-164780672083151/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:45 compute-0 sudo[145765]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:45 compute-0 sudo[145917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgfptvlaatmmxqpucjekyfemskitexeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065985.226535-554-52675169960843/AnsiballZ_stat.py'
Nov 25 10:19:45 compute-0 sudo[145917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:45 compute-0 python3.9[145919]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:19:45 compute-0 sudo[145917]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:46 compute-0 sudo[146042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwsqteaxbhwonlcagzafpionczuibxjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065985.226535-554-52675169960843/AnsiballZ_copy.py'
Nov 25 10:19:46 compute-0 sudo[146042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:46 compute-0 python3.9[146044]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764065985.226535-554-52675169960843/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:46 compute-0 sudo[146042]: pam_unix(sudo:session): session closed for user root
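
The qemu.conf copy records _original_basename=qemu.conf.j2, i.e. the controller rendered a Jinja2 template locally and shipped the result; only the copy is visible on the remote side. A template-task sketch under that assumption:

    - name: Render qemu.conf from its Jinja2 template
      become: true
      ansible.builtin.template:
        src: qemu.conf.j2
        dest: /etc/libvirt/qemu.conf
        owner: libvirt
        group: libvirt
        mode: "0640"
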
Nov 25 10:19:46 compute-0 sudo[146194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhiyjswvtlzwajgwsrabbayevvnlezvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065986.5397472-554-173300466216181/AnsiballZ_stat.py'
Nov 25 10:19:46 compute-0 sudo[146194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:47 compute-0 python3.9[146196]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:19:47 compute-0 sudo[146194]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:47 compute-0 sudo[146319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bduguicwmwclnijjezeyqpvycoxuwvny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065986.5397472-554-173300466216181/AnsiballZ_copy.py'
Nov 25 10:19:47 compute-0 sudo[146319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:47 compute-0 python3.9[146321]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764065986.5397472-554-173300466216181/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:47 compute-0 sudo[146319]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:47 compute-0 sudo[146471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggchgkqpxdksaxcsoulljkcyjmjmsdbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065987.7154386-554-10436926584116/AnsiballZ_stat.py'
Nov 25 10:19:47 compute-0 sudo[146471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:48 compute-0 python3.9[146473]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:19:48 compute-0 sudo[146471]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:48 compute-0 sudo[146594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chwahftkszojwqqlairghukuudeehrfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065987.7154386-554-10436926584116/AnsiballZ_copy.py'
Nov 25 10:19:48 compute-0 sudo[146594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:48 compute-0 python3.9[146596]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764065987.7154386-554-10436926584116/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:48 compute-0 sudo[146594]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:49 compute-0 sudo[146746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfbxyroenfexqielqpbqdatikylpqijh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065988.9084687-554-248627144466268/AnsiballZ_stat.py'
Nov 25 10:19:49 compute-0 sudo[146746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:49 compute-0 python3.9[146748]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:19:49 compute-0 sudo[146746]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:49 compute-0 sudo[146871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbhqxdgnznlimvzbadvlzutgvmxdpfjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065988.9084687-554-248627144466268/AnsiballZ_copy.py'
Nov 25 10:19:49 compute-0 sudo[146871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:49 compute-0 python3.9[146873]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764065988.9084687-554-248627144466268/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:49 compute-0 sudo[146871]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:50 compute-0 sudo[147023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-criqlhdxdumfkjxajvmfuxiwaysqkdzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065990.143735-667-30142638569760/AnsiballZ_command.py'
Nov 25 10:19:50 compute-0 sudo[147023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:50 compute-0 python3.9[147025]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 25 10:19:50 compute-0 sudo[147023]: pam_unix(sudo:session): session closed for user root
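
SASL credentials for the libvirt migration user are created with saslpasswd2, fed the password on stdin. Because the module's stdin parameter is logged here, the password (12345678) is visible in cleartext. A task sketch from the logged parameters; a hardened play would normally suppress this with no_log:

    - name: Create SASL user for libvirt migration
      become: true
      ansible.builtin.command:
        cmd: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
        stdin: "12345678"
      no_log: true  # assumption: not present in the logged invocation, which exposed stdin
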
Nov 25 10:19:51 compute-0 sudo[147176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjgejmyvagaqasrarfhfibbduarwmmnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065990.8668542-676-222799702077207/AnsiballZ_file.py'
Nov 25 10:19:51 compute-0 sudo[147176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:51 compute-0 python3.9[147178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:51 compute-0 sudo[147176]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:51 compute-0 sudo[147328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keffuwrsstoogwhemavcydehngwfodkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065991.522312-676-117733513634619/AnsiballZ_file.py'
Nov 25 10:19:51 compute-0 sudo[147328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:52 compute-0 python3.9[147330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:52 compute-0 sudo[147328]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:52 compute-0 sudo[147480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vavqiusuhfelfscmgypcbwqymeljfpcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065992.1982255-676-141016752150783/AnsiballZ_file.py'
Nov 25 10:19:52 compute-0 sudo[147480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:52 compute-0 python3.9[147482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:52 compute-0 sudo[147480]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:53 compute-0 sudo[147632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acjfrcmgpegmfsnxyffzksddujtmtuwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065992.9681478-676-54538719971315/AnsiballZ_file.py'
Nov 25 10:19:53 compute-0 sudo[147632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:53 compute-0 python3.9[147634]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:53 compute-0 sudo[147632]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:53 compute-0 sudo[147784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtflhuswqcvmwesiuwkihxcdqxhceibm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065993.6805677-676-244358858468682/AnsiballZ_file.py'
Nov 25 10:19:53 compute-0 sudo[147784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:54 compute-0 python3.9[147786]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:54 compute-0 sudo[147784]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:54 compute-0 sudo[147936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isdbmdcjnvbjcwekjnfqudisjavsorvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065994.286284-676-41598413060282/AnsiballZ_file.py'
Nov 25 10:19:54 compute-0 sudo[147936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:54 compute-0 python3.9[147938]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:54 compute-0 sudo[147936]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:55 compute-0 sudo[148088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtcsjeeyvjzwgezminrnqjwtdldrjmop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065994.9646516-676-90619922183146/AnsiballZ_file.py'
Nov 25 10:19:55 compute-0 sudo[148088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:55 compute-0 python3.9[148090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:55 compute-0 sudo[148088]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:56 compute-0 sudo[148240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcekgwlcvnvlkpuszvgwwmcaloetdhex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065995.6894004-676-221553338791079/AnsiballZ_file.py'
Nov 25 10:19:56 compute-0 sudo[148240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:56 compute-0 python3.9[148242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:56 compute-0 sudo[148240]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:56 compute-0 sudo[148392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axpllkuzmdnxrwtnpxzyujuumgyqtytt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065996.3843157-676-236818037702367/AnsiballZ_file.py'
Nov 25 10:19:56 compute-0 sudo[148392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:56 compute-0 python3.9[148394]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:56 compute-0 sudo[148392]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:57 compute-0 sudo[148544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prjdixppykzgrwlrqqxhypxywyyjrlye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065997.0239537-676-7812422793747/AnsiballZ_file.py'
Nov 25 10:19:57 compute-0 sudo[148544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:57 compute-0 python3.9[148546]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:57 compute-0 sudo[148544]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:57 compute-0 sudo[148696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmkdungnqnqobvyihxajcmpfzibrhbsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065997.6378236-676-198215711655776/AnsiballZ_file.py'
Nov 25 10:19:57 compute-0 sudo[148696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:58 compute-0 python3.9[148698]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:58 compute-0 sudo[148696]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:58 compute-0 sudo[148848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peafkcpsgvjalnemytygocndszhumkce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065998.3004673-676-152792341046639/AnsiballZ_file.py'
Nov 25 10:19:58 compute-0 sudo[148848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:58 compute-0 python3.9[148850]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:58 compute-0 sudo[148848]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:59 compute-0 sudo[149000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alqggqstghxzlpjbxdsdxowebifjukfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065999.0076506-676-155563860581105/AnsiballZ_file.py'
Nov 25 10:19:59 compute-0 sudo[149000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:19:59 compute-0 python3.9[149002]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:19:59 compute-0 sudo[149000]: pam_unix(sudo:session): session closed for user root
Nov 25 10:19:59 compute-0 sudo[149154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhsdiepgektcdymewgefekzalrxlcxtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764065999.6460335-676-173091356624687/AnsiballZ_file.py'
Nov 25 10:19:59 compute-0 sudo[149154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:00 compute-0 python3.9[149156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:00 compute-0 sudo[149154]: pam_unix(sudo:session): session closed for user root
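
The file tasks above create systemd drop-in directories for the libvirt socket units. A minimal shell sketch of an equivalent ad-hoc call, assuming compute-0 is a reachable inventory host pattern (the logged run used a playbook via AnsiballZ, not ad-hoc mode):

    # Sketch: create one socket drop-in directory the way the logged file task does.
    ansible compute-0 --become -m ansible.builtin.file \
      -a "path=/etc/systemd/system/virtqemud-admin.socket.d state=directory owner=root group=root mode=0755"
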
Nov 25 10:20:00 compute-0 sshd-session[149003]: Invalid user support from 78.128.112.74 port 41790
Nov 25 10:20:00 compute-0 sshd-session[149003]: Connection closed by invalid user support 78.128.112.74 port 41790 [preauth]
Nov 25 10:20:00 compute-0 sudo[149306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsqslqafikwxmduyekkjxcizzgebpvik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066000.3069181-775-82770900137723/AnsiballZ_stat.py'
Nov 25 10:20:00 compute-0 sudo[149306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:00 compute-0 python3.9[149308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:00 compute-0 sudo[149306]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:01 compute-0 sudo[149429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjvigkwvbuquqbmyivfaybmlkhmvvlvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066000.3069181-775-82770900137723/AnsiballZ_copy.py'
Nov 25 10:20:01 compute-0 sudo[149429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:01 compute-0 python3.9[149431]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066000.3069181-775-82770900137723/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:01 compute-0 sudo[149429]: pam_unix(sudo:session): session closed for user root
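
Each override.conf deployment above is a two-step pattern: ansible.legacy.stat fingerprints the destination, then ansible.legacy.copy uploads the rendered libvirt-socket.unit.j2 template only when the sha1 differs. A hedged shell sketch of that check-then-install logic; the staging path is hypothetical:

    # Sketch of the stat-then-copy idempotency check done by the copy action plugin.
    src=/tmp/override.conf.rendered   # hypothetical staging path for the rendered template
    dst=/etc/systemd/system/virtlogd.socket.d/override.conf
    if [ ! -e "$dst" ] || [ "$(sha1sum "$src" | cut -d' ' -f1)" != "$(sha1sum "$dst" | cut -d' ' -f1)" ]; then
        install -o root -g root -m 0644 "$src" "$dst"
    fi
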
Nov 25 10:20:01 compute-0 sudo[149581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvwtbysaoeiapkpljaeyzxmbpuidsbdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066001.478695-775-237677048276191/AnsiballZ_stat.py'
Nov 25 10:20:01 compute-0 sudo[149581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:01 compute-0 python3.9[149583]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:01 compute-0 sudo[149581]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:02 compute-0 sudo[149704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnbybfsyocywprvjkqzbhxuruqrszbae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066001.478695-775-237677048276191/AnsiballZ_copy.py'
Nov 25 10:20:02 compute-0 sudo[149704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:02 compute-0 python3.9[149706]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066001.478695-775-237677048276191/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:02 compute-0 sudo[149704]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:02 compute-0 sudo[149856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thsevusctmqrlfkfrdxgkxfcfhslgmtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066002.685256-775-263832747787203/AnsiballZ_stat.py'
Nov 25 10:20:02 compute-0 sudo[149856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:03 compute-0 python3.9[149858]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:03 compute-0 sudo[149856]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:03 compute-0 sudo[149979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgpfwkqkjppuxpnmtfemmubzfdgiguef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066002.685256-775-263832747787203/AnsiballZ_copy.py'
Nov 25 10:20:03 compute-0 sudo[149979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:03 compute-0 python3.9[149981]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066002.685256-775-263832747787203/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:03 compute-0 sudo[149979]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:04 compute-0 sudo[150131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfrgrwruuxgwsabqcbxzhkvvndepocja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066003.9338815-775-246317314312764/AnsiballZ_stat.py'
Nov 25 10:20:04 compute-0 sudo[150131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:04 compute-0 python3.9[150133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:04 compute-0 sudo[150131]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:04 compute-0 sudo[150254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xozhtxzmfqvhxbkkpwrxhfumltmmzvqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066003.9338815-775-246317314312764/AnsiballZ_copy.py'
Nov 25 10:20:04 compute-0 sudo[150254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:05 compute-0 python3.9[150256]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066003.9338815-775-246317314312764/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:05 compute-0 sudo[150254]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:05 compute-0 sudo[150406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjpmmfzysihrqkwqxcsablfqsuqqvkrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066005.21932-775-275598085598462/AnsiballZ_stat.py'
Nov 25 10:20:05 compute-0 sudo[150406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:05 compute-0 python3.9[150408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:05 compute-0 sudo[150406]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:06 compute-0 sudo[150529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzcfiykykdegxsjvrxueueicewgkjpzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066005.21932-775-275598085598462/AnsiballZ_copy.py'
Nov 25 10:20:06 compute-0 sudo[150529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:06 compute-0 python3.9[150531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066005.21932-775-275598085598462/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:06 compute-0 sudo[150529]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:06 compute-0 sudo[150681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xveuoyoyhwnrbdmllkidfuulgshumont ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066006.5298655-775-90664268699374/AnsiballZ_stat.py'
Nov 25 10:20:06 compute-0 sudo[150681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:06 compute-0 python3.9[150683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:07 compute-0 sudo[150681]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:07 compute-0 sudo[150804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uilbmezzmyroxwlliwgfcljzdnhrgnok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066006.5298655-775-90664268699374/AnsiballZ_copy.py'
Nov 25 10:20:07 compute-0 sudo[150804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:07 compute-0 python3.9[150806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066006.5298655-775-90664268699374/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:07 compute-0 sudo[150804]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:08 compute-0 podman[150930]: 2025-11-25 10:20:08.078445526 +0000 UTC m=+0.051956697 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
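
The podman entry above is the periodic healthcheck for ovn_metadata_agent reporting health_status=healthy. A sketch for checking the same state by hand (container name taken from the log):

    # Run the container's configured healthcheck; exit status 0 means healthy.
    podman healthcheck run ovn_metadata_agent && echo "ovn_metadata_agent: healthy"
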
Nov 25 10:20:08 compute-0 sudo[150976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isetaqxzevhbveroiaptbjivjqebzway ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066007.7302957-775-99031545475394/AnsiballZ_stat.py'
Nov 25 10:20:08 compute-0 sudo[150976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:08 compute-0 python3.9[150978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:08 compute-0 sudo[150976]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:08 compute-0 sudo[151100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grqxcsugbsclsitbxxvzrbmqheazhgmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066007.7302957-775-99031545475394/AnsiballZ_copy.py'
Nov 25 10:20:08 compute-0 sudo[151100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:08 compute-0 python3.9[151102]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066007.7302957-775-99031545475394/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:08 compute-0 sudo[151100]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:09 compute-0 sudo[151252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aligkrdicrqgzylhqbilwyokqdyamwyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066009.0440094-775-164940887900060/AnsiballZ_stat.py'
Nov 25 10:20:09 compute-0 sudo[151252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:09 compute-0 python3.9[151254]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:09 compute-0 sudo[151252]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:09 compute-0 sudo[151375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wavxofkfeooumenfxxpfaoxhxzrawtnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066009.0440094-775-164940887900060/AnsiballZ_copy.py'
Nov 25 10:20:09 compute-0 sudo[151375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:10 compute-0 python3.9[151377]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066009.0440094-775-164940887900060/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:10 compute-0 sudo[151375]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:10 compute-0 sudo[151527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvaqnrnsscnpyjzejxboecijfunxadt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066010.1777472-775-206183647251843/AnsiballZ_stat.py'
Nov 25 10:20:10 compute-0 sudo[151527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:10 compute-0 python3.9[151529]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:10 compute-0 sudo[151527]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:11 compute-0 sudo[151650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvyupakbhmwnnzwyebxotcussjsmneed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066010.1777472-775-206183647251843/AnsiballZ_copy.py'
Nov 25 10:20:11 compute-0 sudo[151650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:11 compute-0 python3.9[151652]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066010.1777472-775-206183647251843/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:11 compute-0 sudo[151650]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:11 compute-0 sudo[151802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqbsdxogcbhypacujpqtoukexhatmnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066011.3911772-775-199322690989908/AnsiballZ_stat.py'
Nov 25 10:20:11 compute-0 sudo[151802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:11 compute-0 python3.9[151804]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:11 compute-0 sudo[151802]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:12 compute-0 sudo[151925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yphqpyvcespgecupvthijddsbuevuuot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066011.3911772-775-199322690989908/AnsiballZ_copy.py'
Nov 25 10:20:12 compute-0 sudo[151925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:12 compute-0 python3.9[151927]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066011.3911772-775-199322690989908/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:12 compute-0 sudo[151925]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:12 compute-0 sudo[152077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlgvududjbnotiiplmxbqdjtddgjahud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066012.6158648-775-72871282409020/AnsiballZ_stat.py'
Nov 25 10:20:12 compute-0 sudo[152077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:13 compute-0 python3.9[152079]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:13 compute-0 sudo[152077]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:13 compute-0 sudo[152219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjwqvugnygryctkiswgtekkhjhsvksea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066012.6158648-775-72871282409020/AnsiballZ_copy.py'
Nov 25 10:20:13 compute-0 sudo[152219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:13 compute-0 podman[152174]: 2025-11-25 10:20:13.527926493 +0000 UTC m=+0.078976416 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 10:20:13 compute-0 python3.9[152224]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066012.6158648-775-72871282409020/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:13 compute-0 sudo[152219]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:14 compute-0 sudo[152378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flafnhcyanudlgkjoefkwqujebjevtzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066013.8419805-775-276484025273301/AnsiballZ_stat.py'
Nov 25 10:20:14 compute-0 sudo[152378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:14 compute-0 python3.9[152380]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:14 compute-0 sudo[152378]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:14 compute-0 sudo[152501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysmbrkjiouvdvnoacdjkhwelgbkgamif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066013.8419805-775-276484025273301/AnsiballZ_copy.py'
Nov 25 10:20:14 compute-0 sudo[152501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:14 compute-0 python3.9[152503]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066013.8419805-775-276484025273301/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:14 compute-0 sudo[152501]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:15 compute-0 sudo[152653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryntsvqeoobrjvjqmgtealvjhzzgupbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066015.1645675-775-212521897487332/AnsiballZ_stat.py'
Nov 25 10:20:15 compute-0 sudo[152653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:15 compute-0 python3.9[152655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:15 compute-0 sudo[152653]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:16 compute-0 sudo[152776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hddltwloimtuhndqkjggzbyjnfspwchv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066015.1645675-775-212521897487332/AnsiballZ_copy.py'
Nov 25 10:20:16 compute-0 sudo[152776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:16 compute-0 python3.9[152778]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066015.1645675-775-212521897487332/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:16 compute-0 sudo[152776]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:16 compute-0 sudo[152928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urtzwlrjbgaewgrihmmfqrpcygvqptxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066016.4327235-775-54465245846076/AnsiballZ_stat.py'
Nov 25 10:20:16 compute-0 sudo[152928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:16 compute-0 python3.9[152930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:16 compute-0 sudo[152928]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:17 compute-0 sudo[153051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teysmtpkbxhsphzbsbtqoeuezrjozwhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066016.4327235-775-54465245846076/AnsiballZ_copy.py'
Nov 25 10:20:17 compute-0 sudo[153051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:17 compute-0 python3.9[153053]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066016.4327235-775-54465245846076/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:17 compute-0 sudo[153051]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:18 compute-0 python3.9[153203]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
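
The command task above lists SELinux contexts under /run/libvirt and greps for container_*_t types; with pipefail set, grep's exit status is what the playbook sees. A sketch that makes the verdict explicit:

    # Report whether anything under /run/libvirt carries a container_*_t SELinux label.
    set -o pipefail
    if ls -lRZ /run/libvirt | grep -E ':container_\S+_t'; then
        echo "container-labeled entries found under /run/libvirt"
    else
        echo "no container-labeled entries under /run/libvirt"
    fi
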
Nov 25 10:20:18 compute-0 sudo[153356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtrkkkemszavmhemfzrkooyzasbpgngy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066018.416493-981-27417005537967/AnsiballZ_seboolean.py'
Nov 25 10:20:18 compute-0 sudo[153356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:19 compute-0 python3.9[153358]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 25 10:20:20 compute-0 sudo[153356]: pam_unix(sudo:session): session closed for user root
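
The seboolean task persistently enables os_enable_vtpm, which rebuilds and reloads the SELinux policy (the dbus-broker-launch load_policy line a few entries below is that reload). The manual equivalent:

    # Persistently enable the boolean, then verify it took effect.
    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # expect: os_enable_vtpm --> on
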
Nov 25 10:20:21 compute-0 sudo[153512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipbcrgzrohyadgkmyoroiggqhlrllcir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066020.897778-989-188283066469990/AnsiballZ_copy.py'
Nov 25 10:20:21 compute-0 dbus-broker-launch[813]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 25 10:20:21 compute-0 sudo[153512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:21 compute-0 python3.9[153514]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:21 compute-0 sudo[153512]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:22 compute-0 sudo[153664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqpeeuxdxyeragorvmhtmbccljspcoae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066021.7909365-989-96333983872026/AnsiballZ_copy.py'
Nov 25 10:20:22 compute-0 sudo[153664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:22 compute-0 python3.9[153666]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:22 compute-0 sudo[153664]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:22 compute-0 sudo[153816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfqsklgyeyzqkxnphgvsugmqtjadyhag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066022.4693813-989-129210006144382/AnsiballZ_copy.py'
Nov 25 10:20:22 compute-0 sudo[153816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:22 compute-0 python3.9[153818]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:23 compute-0 sudo[153816]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:23 compute-0 sudo[153968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwvywjqfjluraqpvcgldjaubiyhbqaxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066023.1691415-989-224606681078397/AnsiballZ_copy.py'
Nov 25 10:20:23 compute-0 sudo[153968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:23 compute-0 python3.9[153970]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:23 compute-0 sudo[153968]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:24 compute-0 sudo[154120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deyfmlnlivcfdvctwddsfgtxfbnlmvud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066023.7683856-989-273775658963834/AnsiballZ_copy.py'
Nov 25 10:20:24 compute-0 sudo[154120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:24 compute-0 python3.9[154122]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:24 compute-0 sudo[154120]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:24 compute-0 sudo[154272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyyvobsyelvbmqvfmkngdloziqastfuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066024.4235184-1025-82714060580870/AnsiballZ_copy.py'
Nov 25 10:20:24 compute-0 sudo[154272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:24 compute-0 python3.9[154274]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:24 compute-0 sudo[154272]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:25 compute-0 sudo[154424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nccnoepvlrmagjzaljgryptcqtrownny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066025.0797875-1025-97919088912629/AnsiballZ_copy.py'
Nov 25 10:20:25 compute-0 sudo[154424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:25 compute-0 python3.9[154426]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:25 compute-0 sudo[154424]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:26 compute-0 sudo[154576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khihfvfklrzluvbaptdiezjekfkyhxfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066025.7537017-1025-242835507241919/AnsiballZ_copy.py'
Nov 25 10:20:26 compute-0 sudo[154576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:26 compute-0 python3.9[154578]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:26 compute-0 sudo[154576]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:26 compute-0 sudo[154728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdkuqtqunvytcugkrofnsoqzgujwkshk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066026.4494364-1025-68862207911455/AnsiballZ_copy.py'
Nov 25 10:20:26 compute-0 sudo[154728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:26 compute-0 python3.9[154730]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:26 compute-0 sudo[154728]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:27 compute-0 sudo[154880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlszqemdjdvlawnpftssakuizubdpuim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066027.0990906-1025-248390894852643/AnsiballZ_copy.py'
Nov 25 10:20:27 compute-0 sudo[154880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:27 compute-0 python3.9[154882]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:27 compute-0 sudo[154880]: pam_unix(sudo:session): session closed for user root
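
The copy tasks above fan a single TLS certificate/key pair plus CA out to the paths libvirt and QEMU read. A condensed shell sketch with owners, groups, and modes mirrored from the logged invocations:

    # Distribute libvirt/QEMU TLS material (destinations and modes as logged above).
    src=/var/lib/openstack/certs/libvirt/default
    install -o root -g root -m 0644 "$src/tls.crt" /etc/pki/libvirt/servercert.pem
    install -o root -g root -m 0600 "$src/tls.key" /etc/pki/libvirt/private/serverkey.pem
    install -o root -g root -m 0644 "$src/tls.crt" /etc/pki/libvirt/clientcert.pem
    install -o root -g root -m 0644 "$src/tls.key" /etc/pki/libvirt/private/clientkey.pem
    install -o root -g root -m 0644 "$src/ca.crt"  /etc/pki/CA/cacert.pem
    install -o root -g qemu -m 0640 "$src/tls.crt" /etc/pki/qemu/server-cert.pem
    install -o root -g qemu -m 0640 "$src/tls.key" /etc/pki/qemu/server-key.pem
    install -o root -g qemu -m 0640 "$src/tls.crt" /etc/pki/qemu/client-cert.pem
    install -o root -g qemu -m 0640 "$src/tls.key" /etc/pki/qemu/client-key.pem
    install -o root -g qemu -m 0640 "$src/ca.crt"  /etc/pki/qemu/ca-cert.pem
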
Nov 25 10:20:28 compute-0 sudo[155032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amezlcxusytljhyrpaiebkdftccjzmbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066027.7648833-1061-167830242829983/AnsiballZ_systemd.py'
Nov 25 10:20:28 compute-0 sudo[155032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:28 compute-0 python3.9[155034]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:20:28 compute-0 systemd[1]: Reloading.
Nov 25 10:20:28 compute-0 systemd-sysv-generator[155066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:20:28 compute-0 systemd-rc-local-generator[155062]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:20:28 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 25 10:20:28 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 25 10:20:28 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 25 10:20:28 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 25 10:20:28 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 25 10:20:28 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 25 10:20:28 compute-0 sudo[155032]: pam_unix(sudo:session): session closed for user root
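
Each restart above uses ansible.builtin.systemd with daemon_reload=True, so systemd re-reads the fresh socket drop-ins before the service comes up; the socket units are pulled in first, then the daemon. The manual equivalent for virtlogd:

    # Reload unit files to pick up the drop-ins, then restart the daemon.
    systemctl daemon-reload
    systemctl restart virtlogd.service
    systemctl --no-pager status virtlogd.socket virtlogd-admin.socket   # verify the sockets listen
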
Nov 25 10:20:29 compute-0 sudo[155226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snuhdixtsroiayluvmkksenuyyxcmzbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066028.9561257-1061-75231960653807/AnsiballZ_systemd.py'
Nov 25 10:20:29 compute-0 sudo[155226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:29 compute-0 python3.9[155228]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:20:29 compute-0 systemd[1]: Reloading.
Nov 25 10:20:29 compute-0 systemd-rc-local-generator[155253]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:20:29 compute-0 systemd-sysv-generator[155257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:20:29 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 25 10:20:29 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 25 10:20:29 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 25 10:20:29 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 25 10:20:29 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 25 10:20:29 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 25 10:20:29 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 10:20:29 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 10:20:29 compute-0 sudo[155226]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:30 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 25 10:20:30 compute-0 sudo[155443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfvatjjjymrgyibmfrflpmubnvysveao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066030.1357663-1061-104543039916436/AnsiballZ_systemd.py'
Nov 25 10:20:30 compute-0 sudo[155443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:30 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 25 10:20:30 compute-0 python3.9[155445]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:20:30 compute-0 systemd[1]: Reloading.
Nov 25 10:20:30 compute-0 systemd-rc-local-generator[155477]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:20:30 compute-0 systemd-sysv-generator[155480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:20:31 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 25 10:20:31 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 25 10:20:31 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 25 10:20:31 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 25 10:20:31 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 25 10:20:31 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 25 10:20:31 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 10:20:31 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 10:20:31 compute-0 sudo[155443]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:31 compute-0 sudo[155663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udtzzxghhfymxfjtykmktuavvlstuqxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066031.315413-1061-91530715818634/AnsiballZ_systemd.py'
Nov 25 10:20:31 compute-0 sudo[155663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:31 compute-0 python3.9[155665]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:20:31 compute-0 systemd[1]: Reloading.
Nov 25 10:20:31 compute-0 systemd-rc-local-generator[155689]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:20:32 compute-0 systemd-sysv-generator[155694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:20:32 compute-0 setroubleshoot[155317]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 968f75b9-f304-4c37-bcd5-f2872d81461c
Nov 25 10:20:32 compute-0 setroubleshoot[155317]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 25 10:20:32 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 25 10:20:32 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 25 10:20:32 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 25 10:20:32 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 25 10:20:32 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 25 10:20:32 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 25 10:20:32 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 25 10:20:32 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 25 10:20:32 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 25 10:20:32 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 25 10:20:32 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 10:20:32 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 10:20:32 compute-0 sudo[155663]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:32 compute-0 sudo[155879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebptrnhsrzvpdqkearbczlmhbllqvcgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066032.436397-1061-255450334505764/AnsiballZ_systemd.py'
Nov 25 10:20:32 compute-0 sudo[155879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:33 compute-0 python3.9[155881]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:20:33 compute-0 systemd[1]: Reloading.
Nov 25 10:20:33 compute-0 systemd-sysv-generator[155908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:20:33 compute-0 systemd-rc-local-generator[155905]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:20:33 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 25 10:20:33 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 25 10:20:33 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 25 10:20:33 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 25 10:20:33 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 25 10:20:33 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 25 10:20:33 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 25 10:20:33 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 25 10:20:33 compute-0 sudo[155879]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:33 compute-0 sudo[156091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjyetfbblxtnvxevjphpccanincgullj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066033.6770847-1098-164607744796593/AnsiballZ_file.py'
Nov 25 10:20:33 compute-0 sudo[156091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:34 compute-0 python3.9[156093]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:34 compute-0 sudo[156091]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:34 compute-0 sudo[156243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxkdguywjutrtnrbbbynaaclhfoxmuai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066034.2779832-1106-257398393780776/AnsiballZ_find.py'
Nov 25 10:20:34 compute-0 sudo[156243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:34 compute-0 python3.9[156245]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:20:34 compute-0 sudo[156243]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:35 compute-0 sudo[156395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmecxthvgqektkctkwqfjbibaxonfntl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066035.2524047-1120-235133459378173/AnsiballZ_stat.py'
Nov 25 10:20:35 compute-0 sudo[156395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:35 compute-0 python3.9[156397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:35 compute-0 sudo[156395]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:20:36.007 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:20:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:20:36.008 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:20:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:20:36.009 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:20:36 compute-0 sudo[156518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edtpqpkqmcudidcrdrncpfaftqxbliyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066035.2524047-1120-235133459378173/AnsiballZ_copy.py'
Nov 25 10:20:36 compute-0 sudo[156518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:36 compute-0 python3.9[156520]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066035.2524047-1120-235133459378173/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:36 compute-0 sudo[156518]: pam_unix(sudo:session): session closed for user root
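
This stat-then-copy pair is Ansible's idempotent file write: the legacy.stat task hashes the destination, and the copy only rewrites it when the sha1 differs from the rendered template (here 5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061). A rough shell equivalent, with /tmp/firewall.yaml standing in as a hypothetical path for the rendered source:

    want=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061
    have=$(sha1sum /var/lib/edpm-config/firewall/libvirt.yaml 2>/dev/null | cut -d' ' -f1)
    [ "$have" = "$want" ] || install -m 0640 /tmp/firewall.yaml /var/lib/edpm-config/firewall/libvirt.yaml
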
Nov 25 10:20:36 compute-0 sudo[156670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miuiitfaobjszpejpetzspgdkjucawli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066036.6168568-1136-84165276772774/AnsiballZ_file.py'
Nov 25 10:20:36 compute-0 sudo[156670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:37 compute-0 python3.9[156672]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:37 compute-0 sudo[156670]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:37 compute-0 sudo[156822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfckpzvmivsdvacbwdfzrzkglebwsul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066037.2864583-1144-175234453655186/AnsiballZ_stat.py'
Nov 25 10:20:37 compute-0 sudo[156822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:37 compute-0 python3.9[156824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:37 compute-0 sudo[156822]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:38 compute-0 sudo[156900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldqykjlgkiijqpokmuqzwrratxeayhvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066037.2864583-1144-175234453655186/AnsiballZ_file.py'
Nov 25 10:20:38 compute-0 sudo[156900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:38 compute-0 python3.9[156902]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:38 compute-0 sudo[156900]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:38 compute-0 sudo[157067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmhupigagmytdktcnebnullcenwbmoqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066038.4435577-1156-279902662599580/AnsiballZ_stat.py'
Nov 25 10:20:38 compute-0 sudo[157067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:38 compute-0 podman[157026]: 2025-11-25 10:20:38.808331258 +0000 UTC m=+0.073344175 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 10:20:39 compute-0 python3.9[157073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:39 compute-0 sudo[157067]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:39 compute-0 sudo[157149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeluftniengsdkpzdrhiqmfizuudesmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066038.4435577-1156-279902662599580/AnsiballZ_file.py'
Nov 25 10:20:39 compute-0 sudo[157149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:39 compute-0 python3.9[157151]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=._szhflth recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:39 compute-0 sudo[157149]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:40 compute-0 sudo[157301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkfpyzjerbsztiqnlffmfobywpznozji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066039.6823938-1168-229723350134883/AnsiballZ_stat.py'
Nov 25 10:20:40 compute-0 sudo[157301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:40 compute-0 python3.9[157303]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:40 compute-0 sudo[157301]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:40 compute-0 sudo[157379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwoherbmozzcpygqztjkjlhyfgwucvbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066039.6823938-1168-229723350134883/AnsiballZ_file.py'
Nov 25 10:20:40 compute-0 sudo[157379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:40 compute-0 python3.9[157381]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:40 compute-0 sudo[157379]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:41 compute-0 sudo[157531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bthznxukzzdwoxrcdqarniyxkbxfwcxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066041.1188633-1181-114603924509649/AnsiballZ_command.py'
Nov 25 10:20:41 compute-0 sudo[157531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:41 compute-0 python3.9[157533]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:20:41 compute-0 sudo[157531]: pam_unix(sudo:session): session closed for user root
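
nft -j list ruleset dumps the entire ruleset as libnftables JSON, which the playbook captures to inspect current state. Assuming jq is installed (an assumption; it appears nowhere in this log), the chains present could be listed like so:

    nft -j list ruleset | jq -r '.nftables[] | select(.chain) | .chain.name'
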
Nov 25 10:20:42 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 25 10:20:42 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 25 10:20:42 compute-0 sudo[157685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpjmpxvtxisbycttjkvxhblnkgcwyarb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066041.9012268-1189-209363530739444/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 10:20:42 compute-0 sudo[157685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:42 compute-0 python3[157687]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 10:20:42 compute-0 sudo[157685]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:43 compute-0 sudo[157837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmlkrfjpprkvawjuiifktlyjpqhrbnod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066042.8511796-1197-13902636997191/AnsiballZ_stat.py'
Nov 25 10:20:43 compute-0 sudo[157837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:43 compute-0 python3.9[157839]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:43 compute-0 sudo[157837]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:43 compute-0 sudo[157915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avutgonerdvldgdosmyqtcqaqtoxyqim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066042.8511796-1197-13902636997191/AnsiballZ_file.py'
Nov 25 10:20:43 compute-0 sudo[157915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:43 compute-0 podman[157917]: 2025-11-25 10:20:43.677158033 +0000 UTC m=+0.084930936 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 10:20:43 compute-0 python3.9[157918]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:43 compute-0 sudo[157915]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:44 compute-0 sudo[158096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmueeaezwisbbktjedhydfeqvrqmsxqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066043.94699-1209-236897745158195/AnsiballZ_stat.py'
Nov 25 10:20:44 compute-0 sudo[158096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:44 compute-0 python3.9[158098]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:44 compute-0 sudo[158096]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:44 compute-0 sudo[158174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmtimwfotlwxwhcfasexofnbumgfidbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066043.94699-1209-236897745158195/AnsiballZ_file.py'
Nov 25 10:20:44 compute-0 sudo[158174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:44 compute-0 python3.9[158176]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:45 compute-0 sudo[158174]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:45 compute-0 sudo[158326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imvdarajgaklvrsbdvwvkamettutomsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066045.1728654-1221-64135649752942/AnsiballZ_stat.py'
Nov 25 10:20:45 compute-0 sudo[158326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:45 compute-0 python3.9[158328]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:45 compute-0 sudo[158326]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:45 compute-0 sudo[158404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yegtjmbliynjchcpupmsgvpfsackmsvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066045.1728654-1221-64135649752942/AnsiballZ_file.py'
Nov 25 10:20:45 compute-0 sudo[158404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:46 compute-0 python3.9[158406]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:46 compute-0 sudo[158404]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:46 compute-0 sudo[158556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuwjisrkxovjhgvkvvmkknesvkojheac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066046.306687-1233-265201918938092/AnsiballZ_stat.py'
Nov 25 10:20:46 compute-0 sudo[158556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:46 compute-0 python3.9[158558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:46 compute-0 sudo[158556]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:47 compute-0 sudo[158634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njgvxcoixggnlrmycyuwzdwaymrbxopn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066046.306687-1233-265201918938092/AnsiballZ_file.py'
Nov 25 10:20:47 compute-0 sudo[158634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:47 compute-0 python3.9[158636]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:47 compute-0 sudo[158634]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:47 compute-0 sudo[158786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afnyjlqrdmybeuajxircqxuggwqnyvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066047.5079188-1245-236512700397693/AnsiballZ_stat.py'
Nov 25 10:20:47 compute-0 sudo[158786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:48 compute-0 python3.9[158788]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:48 compute-0 sudo[158786]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:48 compute-0 sudo[158912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lywvtbvcuwfokbjjmlpexlwhpwueagwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066047.5079188-1245-236512700397693/AnsiballZ_copy.py'
Nov 25 10:20:48 compute-0 sudo[158912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:48 compute-0 python3.9[158914]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066047.5079188-1245-236512700397693/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:48 compute-0 sudo[158912]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:49 compute-0 sudo[159064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsnhjmfytnoymwpgrqwnuarckgshtrly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066049.1139696-1260-154725448328812/AnsiballZ_file.py'
Nov 25 10:20:49 compute-0 sudo[159064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:49 compute-0 python3.9[159066]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:49 compute-0 sudo[159064]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:50 compute-0 sudo[159216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjxynnujcizylfnketvshoqfquoxjciq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066049.8213284-1268-44061945059405/AnsiballZ_command.py'
Nov 25 10:20:50 compute-0 sudo[159216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:50 compute-0 python3.9[159218]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:20:50 compute-0 sudo[159216]: pam_unix(sudo:session): session closed for user root
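
The check command concatenates the nftables fragments in their load order and runs them through nft -c, which parses and validates without committing anything to the kernel:

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: syntax/semantic check only
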
Nov 25 10:20:50 compute-0 sudo[159371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyfrbplywjwifqroknedhxkxzxgezlgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066050.4698005-1276-199031579596337/AnsiballZ_blockinfile.py'
Nov 25 10:20:50 compute-0 sudo[159371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:51 compute-0 python3.9[159373]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:51 compute-0 sudo[159371]: pam_unix(sudo:session): session closed for user root
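
The blockinfile task keeps a marker-delimited section in /etc/sysconfig/nftables.conf and validates the candidate file with nft -c -f %s before swapping it in. Reconstructed from the task parameters, the managed section reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
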
Nov 25 10:20:51 compute-0 sudo[159523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxijjdgyqctsxqfrcxcaujsnqzhleqlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066051.3253677-1285-128769330225434/AnsiballZ_command.py'
Nov 25 10:20:51 compute-0 sudo[159523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:51 compute-0 python3.9[159525]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:20:51 compute-0 sudo[159523]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:52 compute-0 sudo[159676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftwvzknkupwldtqbgvxyvxlicsegvacs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066051.9492779-1293-91233648088791/AnsiballZ_stat.py'
Nov 25 10:20:52 compute-0 sudo[159676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:52 compute-0 python3.9[159678]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:20:52 compute-0 sudo[159676]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:53 compute-0 sudo[159830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqbbbnauyvdinnnuyvffyatqsborkqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066052.734029-1301-275736595608358/AnsiballZ_command.py'
Nov 25 10:20:53 compute-0 sudo[159830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:53 compute-0 python3.9[159832]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:20:53 compute-0 sudo[159830]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:53 compute-0 sudo[159985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoyrlxfuovaeddqvbtytddjwlalxuyly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066053.482777-1309-165889422177635/AnsiballZ_file.py'
Nov 25 10:20:53 compute-0 sudo[159985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:53 compute-0 python3.9[159987]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:53 compute-0 sudo[159985]: pam_unix(sudo:session): session closed for user root
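
The edpm-rules.nft.changed flag file ties these steps together: it was touched at 10:20:49 when the rules file changed, the stat above tests for it, the flushes/rules/update-jumps are piped into nft -f - (this time without -c, so the ruleset is actually loaded), and the flag is removed. The same guard, sketched in shell:

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi
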
Nov 25 10:20:54 compute-0 sudo[160137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beeukubhierhrhvrvqzahsshnfautpuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066054.1175404-1317-198064456002040/AnsiballZ_stat.py'
Nov 25 10:20:54 compute-0 sudo[160137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:54 compute-0 python3.9[160139]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:54 compute-0 sudo[160137]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:54 compute-0 sudo[160260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtygalkiiaryhfwauedqxnuyzbifnway ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066054.1175404-1317-198064456002040/AnsiballZ_copy.py'
Nov 25 10:20:54 compute-0 sudo[160260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:55 compute-0 python3.9[160262]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066054.1175404-1317-198064456002040/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:55 compute-0 sudo[160260]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:55 compute-0 sudo[160412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqvjbfubtiqwqmhijrughknwietbqtoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066055.3551164-1332-108497639481861/AnsiballZ_stat.py'
Nov 25 10:20:55 compute-0 sudo[160412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:55 compute-0 python3.9[160414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:55 compute-0 sudo[160412]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:56 compute-0 sudo[160535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyvybrhnqlvlmzuohkjscthrmlcmxvem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066055.3551164-1332-108497639481861/AnsiballZ_copy.py'
Nov 25 10:20:56 compute-0 sudo[160535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:56 compute-0 python3.9[160537]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066055.3551164-1332-108497639481861/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:56 compute-0 sudo[160535]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:56 compute-0 sudo[160687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvvcmdqutyevtjxvhhneaerflbklvqhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066056.7101026-1347-2324298006706/AnsiballZ_stat.py'
Nov 25 10:20:56 compute-0 sudo[160687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:57 compute-0 python3.9[160689]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:20:57 compute-0 sudo[160687]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:57 compute-0 sudo[160810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iipvvhhdwprpfbkvdwmktubpegqwogzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066056.7101026-1347-2324298006706/AnsiballZ_copy.py'
Nov 25 10:20:57 compute-0 sudo[160810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:57 compute-0 python3.9[160812]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066056.7101026-1347-2324298006706/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:20:57 compute-0 sudo[160810]: pam_unix(sudo:session): session closed for user root
Nov 25 10:20:58 compute-0 sudo[160962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiylyvcvgtjryzefsqdkuspctczaffzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066057.9057736-1362-195891023324722/AnsiballZ_systemd.py'
Nov 25 10:20:58 compute-0 sudo[160962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:58 compute-0 python3.9[160964]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:20:58 compute-0 systemd[1]: Reloading.
Nov 25 10:20:58 compute-0 systemd-rc-local-generator[160992]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:20:58 compute-0 systemd-sysv-generator[160995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 25 10:20:58 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 25 10:20:58 compute-0 sudo[160962]: pam_unix(sudo:session): session closed for user root
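
Enabling and restarting the new target (enabled=True, state=restarted, daemon_reload=True) is, run by hand, roughly:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target   # "Reached target edpm_libvirt.target." confirms it is active
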
Nov 25 10:20:59 compute-0 sudo[161154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vghpwdeolqeqovjnvattluitsmdlsfnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066059.0355997-1370-215666330085004/AnsiballZ_systemd.py'
Nov 25 10:20:59 compute-0 sudo[161154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:20:59 compute-0 python3.9[161156]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 25 10:20:59 compute-0 systemd[1]: Reloading.
Nov 25 10:20:59 compute-0 systemd-rc-local-generator[161180]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:20:59 compute-0 systemd-sysv-generator[161184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 25 10:20:59 compute-0 systemd[1]: Reloading.
Nov 25 10:21:00 compute-0 systemd-sysv-generator[161223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 25 10:21:00 compute-0 systemd-rc-local-generator[161219]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:21:00 compute-0 sudo[161154]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:00 compute-0 sshd-session[106780]: Connection closed by 192.168.122.30 port 36742
Nov 25 10:21:00 compute-0 sshd-session[106777]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:21:00 compute-0 systemd-logind[822]: Session 23 logged out. Waiting for processes to exit.
Nov 25 10:21:00 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Nov 25 10:21:00 compute-0 systemd[1]: session-23.scope: Consumed 3min 31.429s CPU time.
Nov 25 10:21:00 compute-0 systemd-logind[822]: Removed session 23.
Nov 25 10:21:08 compute-0 sshd-session[161251]: Accepted publickey for zuul from 192.168.122.30 port 48506 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:21:08 compute-0 systemd-logind[822]: New session 24 of user zuul.
Nov 25 10:21:08 compute-0 systemd[1]: Started Session 24 of User zuul.
Nov 25 10:21:08 compute-0 sshd-session[161251]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:21:08 compute-0 podman[161319]: 2025-11-25 10:21:08.960777495 +0000 UTC m=+0.064259890 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:21:09 compute-0 python3.9[161423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:21:11 compute-0 python3.9[161577]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:21:11 compute-0 network[161594]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:21:11 compute-0 network[161595]: 'network-scripts' will be removed from the distribution in the near future.
Nov 25 10:21:11 compute-0 network[161596]: It is advised to switch to 'NetworkManager' for network management instead.
Nov 25 10:21:13 compute-0 podman[161695]: 2025-11-25 10:21:13.85347566 +0000 UTC m=+0.123148284 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:21:15 compute-0 sudo[161892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfmgwiesmkmrccsmrrfzxrehmrdjcary ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066075.4718943-47-114085360523573/AnsiballZ_setup.py'
Nov 25 10:21:15 compute-0 sudo[161892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:16 compute-0 python3.9[161894]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:21:16 compute-0 sudo[161892]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:16 compute-0 sudo[161976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkttcwlafbqgvlufkaammxyliwkpncrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066075.4718943-47-114085360523573/AnsiballZ_dnf.py'
Nov 25 10:21:16 compute-0 sudo[161976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:17 compute-0 python3.9[161978]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:21:22 compute-0 sudo[161976]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:23 compute-0 sudo[162129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-staatwmcmuijrxxevfjyioozulcdhosy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066082.5004685-59-106028503142616/AnsiballZ_stat.py'
Nov 25 10:21:23 compute-0 sudo[162129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:23 compute-0 python3.9[162131]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:21:23 compute-0 sudo[162129]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:23 compute-0 sudo[162281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twattqinkgngekjibfqheisvgmvtdtjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066083.4971797-69-218313892936621/AnsiballZ_command.py'
Nov 25 10:21:23 compute-0 sudo[162281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:24 compute-0 python3.9[162283]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:21:24 compute-0 sudo[162281]: pam_unix(sudo:session): session closed for user root
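
restorecon is run here in dry-run mode: -n reports files whose SELinux context differs from policy without touching them, -v prints each one, and -r recurses. Dropping -n would perform the relabel; shown for contrast, not run in this log:

    restorecon -nvr /etc/iscsi /var/lib/iscsi   # preview mislabeled files only
    restorecon -vr  /etc/iscsi /var/lib/iscsi   # actually relabel
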
Nov 25 10:21:24 compute-0 sudo[162434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikrjhcwxpihpmadeuygwkdddsccgsibv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066084.5344634-79-112833475845250/AnsiballZ_stat.py'
Nov 25 10:21:24 compute-0 sudo[162434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:25 compute-0 python3.9[162436]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:21:25 compute-0 sudo[162434]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:25 compute-0 sudo[162586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooubxqqzuejtdlepmvsepwppwjxrcuhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066085.193241-87-12883296613036/AnsiballZ_command.py'
Nov 25 10:21:25 compute-0 sudo[162586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:25 compute-0 python3.9[162588]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:21:25 compute-0 sudo[162586]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:26 compute-0 sudo[162739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbhjnubnoznvoryvgtrivcivujsalchl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066085.9774427-95-99377300800918/AnsiballZ_stat.py'
Nov 25 10:21:26 compute-0 sudo[162739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:26 compute-0 python3.9[162741]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:26 compute-0 sudo[162739]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:26 compute-0 sudo[162862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drbhbvjfkpqoepqikksbcdkjsblshypn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066085.9774427-95-99377300800918/AnsiballZ_copy.py'
Nov 25 10:21:26 compute-0 sudo[162862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:27 compute-0 python3.9[162864]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066085.9774427-95-99377300800918/.source.iscsi _original_basename=.wkroy57w follow=False checksum=2c64eb85dfb8909dff623ea4b95e934953f206e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:27 compute-0 sudo[162862]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:27 compute-0 sudo[163014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxmlywhuvatpodbuuzpnhxawpjvxggfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066087.3603048-110-95417286770166/AnsiballZ_file.py'
Nov 25 10:21:27 compute-0 sudo[163014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:28 compute-0 python3.9[163016]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:28 compute-0 sudo[163014]: pam_unix(sudo:session): session closed for user root
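
Taken together, the last three tasks regenerate the iSCSI initiator identity: iscsi-iname prints a fresh IQN, it lands in /etc/iscsi/initiatorname.iscsi in that file's standard InitiatorName= format, and the .initiator_reset marker lets the earlier stat check skip this on future runs. As a sketch:

    iqn=$(/usr/sbin/iscsi-iname)
    printf 'InitiatorName=%s\n' "$iqn" > /etc/iscsi/initiatorname.iscsi
    chmod 0644 /etc/iscsi/initiatorname.iscsi
    touch /etc/iscsi/.initiator_reset
    chmod 0600 /etc/iscsi/.initiator_reset
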
Nov 25 10:21:28 compute-0 sudo[163166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thfaiqornccdbnmhyvanptqlphpbpzfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066088.215234-118-92637351906411/AnsiballZ_lineinfile.py'
Nov 25 10:21:28 compute-0 sudo[163166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:28 compute-0 python3.9[163168]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:28 compute-0 sudo[163166]: pam_unix(sudo:session): session closed for user root
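
The lineinfile task pins the CHAP digest preference for iSCSI sessions; after it runs, /etc/iscsi/iscsid.conf carries the line below (placed after the commented #node.session.auth.chap_algs default when newly added), which can be verified with grep:

    grep '^node.session.auth.chap_algs' /etc/iscsi/iscsid.conf
    # expected: node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5
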
Nov 25 10:21:28 compute-0 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:21:29 compute-0 sudo[163319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcrhfbuydnxjrtgkxqtivhojqsibqsnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066089.0640779-127-187974422619162/AnsiballZ_systemd_service.py'
Nov 25 10:21:29 compute-0 sudo[163319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:29 compute-0 python3.9[163321]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:21:29 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 25 10:21:30 compute-0 sudo[163319]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:30 compute-0 sudo[163475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kccohepxnmycuppvpcfwismabfatnuoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066090.1933491-135-980071023869/AnsiballZ_systemd_service.py'
Nov 25 10:21:30 compute-0 sudo[163475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:30 compute-0 python3.9[163477]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:21:30 compute-0 systemd[1]: Reloading.
Nov 25 10:21:30 compute-0 systemd-rc-local-generator[163504]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:21:30 compute-0 systemd-sysv-generator[163510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 25 10:21:31 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 10:21:31 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 10:21:31 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 25 10:21:31 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 10:21:31 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Nov 25 10:21:31 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Nov 25 10:21:31 compute-0 sudo[163475]: pam_unix(sudo:session): session closed for user root
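
Socket first, then service: iscsid.socket is enabled and started for socket activation, after which iscsid itself is started, pulling in the iSCSI transport class module seen above. By hand, roughly:

    systemctl enable --now iscsid.socket
    systemctl enable --now iscsid.service
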
Nov 25 10:21:31 compute-0 sudo[163677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyhliicatyuvnewddjnsixdzecullwah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066091.500078-146-53296853023180/AnsiballZ_service_facts.py'
Nov 25 10:21:31 compute-0 sudo[163677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:31 compute-0 python3.9[163679]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:21:32 compute-0 network[163696]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:21:32 compute-0 network[163697]: 'network-scripts' will be removed from the distribution in the near future.
Nov 25 10:21:32 compute-0 network[163698]: It is advised to switch to 'NetworkManager' for network management instead.
Nov 25 10:21:34 compute-0 sudo[163677]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:35 compute-0 sudo[163967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgnqxsuksyiuadvappcslpupokpocigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066095.0608284-156-88242536198710/AnsiballZ_file.py'
Nov 25 10:21:35 compute-0 sudo[163967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:35 compute-0 python3.9[163969]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 10:21:35 compute-0 sudo[163967]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:21:36.009 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:21:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:21:36.011 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:21:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:21:36.011 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:21:36 compute-0 sudo[164119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfnzfpmggtwdatmdpfqkiuvztpoizzvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066095.899483-164-13276373385404/AnsiballZ_modprobe.py'
Nov 25 10:21:36 compute-0 sudo[164119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:36 compute-0 python3.9[164121]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 25 10:21:36 compute-0 sudo[164119]: pam_unix(sudo:session): session closed for user root
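community.general.modprobe with state=present loads the module into the running kernel immediately; with persistent=disabled it deliberately leaves boot-time configuration to the following tasks. A rough shell equivalent, assuming nothing beyond what the log shows:

  # Load the device-mapper multipath module for the running kernel only.
  modprobe dm-multipath
  # Confirm it registered; the in-kernel name uses an underscore.
  lsmod | grep dm_multipath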
Nov 25 10:21:37 compute-0 sudo[164275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvraoynnvfgwsrxmycyqsvobmgcgnfgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066096.780486-172-72243618793870/AnsiballZ_stat.py'
Nov 25 10:21:37 compute-0 sudo[164275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:37 compute-0 python3.9[164277]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:37 compute-0 sudo[164275]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:37 compute-0 sudo[164398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcuuworieyccbxtqxzbiuzjbcghwbgrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066096.780486-172-72243618793870/AnsiballZ_copy.py'
Nov 25 10:21:37 compute-0 sudo[164398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:37 compute-0 python3.9[164400]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066096.780486-172-72243618793870/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:37 compute-0 sudo[164398]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:38 compute-0 sudo[164550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orckvguwwztilusganaasjkyezixmmcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066097.9616091-188-184161627604942/AnsiballZ_lineinfile.py'
Nov 25 10:21:38 compute-0 sudo[164550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:38 compute-0 python3.9[164552]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:38 compute-0 sudo[164550]: pam_unix(sudo:session): session closed for user root
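Boot-time persistence is then handled twice: a modules-load.d drop-in rendered from module-load.conf.j2, and a matching line in /etc/modules. The drop-in's content is not logged, but for a module-load file it is plausibly just the module name; a sketch under that assumption:

  # systemd-modules-load reads /etc/modules-load.d/*.conf at boot.
  printf 'dm-multipath\n' > /etc/modules-load.d/dm-multipath.conf
  chmod 0644 /etc/modules-load.d/dm-multipath.conf
  # /etc/modules is kept in sync as well, as the lineinfile task above does.
  grep -qx 'dm-multipath' /etc/modules || echo 'dm-multipath' >> /etc/modules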
Nov 25 10:21:39 compute-0 sudo[164716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyqvcsnxxgueroxrtysnekzgnlymfnvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066098.5702825-196-176202806361013/AnsiballZ_systemd.py'
Nov 25 10:21:39 compute-0 sudo[164716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:39 compute-0 podman[164676]: 2025-11-25 10:21:39.20278459 +0000 UTC m=+0.088218506 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 10:21:39 compute-0 python3.9[164722]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:21:39 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 10:21:39 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 10:21:39 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 10:21:39 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 10:21:39 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 10:21:39 compute-0 sudo[164716]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:40 compute-0 sudo[164877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvikismvwjchhaunhanwiqgspzvjnfsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066099.807905-204-1490645519568/AnsiballZ_file.py'
Nov 25 10:21:40 compute-0 sudo[164877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:40 compute-0 python3.9[164879]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:21:40 compute-0 sudo[164877]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:40 compute-0 sudo[165029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkwwbttxcihbglritasfaudentsylfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066100.6271496-213-30935047192213/AnsiballZ_stat.py'
Nov 25 10:21:40 compute-0 sudo[165029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:41 compute-0 python3.9[165031]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:21:41 compute-0 sudo[165029]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:41 compute-0 sudo[165181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hleuoiguziojiahxlazlpduljtgjxduq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066101.313293-222-22477467725966/AnsiballZ_stat.py'
Nov 25 10:21:41 compute-0 sudo[165181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:41 compute-0 python3.9[165183]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:21:41 compute-0 sudo[165181]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:42 compute-0 sudo[165333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwsietralesxjysuahtrfabervctyrsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066101.9932537-230-177817392197951/AnsiballZ_stat.py'
Nov 25 10:21:42 compute-0 sudo[165333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:42 compute-0 python3.9[165335]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:42 compute-0 sudo[165333]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:42 compute-0 sudo[165456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgqchijehoilwvwohzlxcjqzuoulclhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066101.9932537-230-177817392197951/AnsiballZ_copy.py'
Nov 25 10:21:42 compute-0 sudo[165456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:43 compute-0 python3.9[165458]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066101.9932537-230-177817392197951/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:43 compute-0 sudo[165456]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:43 compute-0 sudo[165608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgkotdayuqgdlvqgwhbnurlzmoihdeuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066103.2498922-245-238970458869296/AnsiballZ_command.py'
Nov 25 10:21:43 compute-0 sudo[165608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:43 compute-0 python3.9[165610]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:21:44 compute-0 sudo[165608]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:44 compute-0 podman[165612]: 2025-11-25 10:21:44.980724868 +0000 UTC m=+0.093229222 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:21:45 compute-0 sudo[165787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bedoiiefdzmznqtcfckgjnmghngzrozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066105.0505927-253-240303571061983/AnsiballZ_lineinfile.py'
Nov 25 10:21:45 compute-0 sudo[165787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:45 compute-0 python3.9[165789]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:45 compute-0 sudo[165787]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:46 compute-0 sudo[165939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewldzaooloaoosorsxwxuavmkvbhfaps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066105.6907902-261-61123752511245/AnsiballZ_replace.py'
Nov 25 10:21:46 compute-0 sudo[165939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:46 compute-0 python3.9[165941]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:46 compute-0 sudo[165939]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:46 compute-0 sudo[166091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrfvoreuhqqpcrpsziwoojwufzvzaenb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066106.4609885-269-149012822387398/AnsiballZ_replace.py'
Nov 25 10:21:46 compute-0 sudo[166091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:46 compute-0 python3.9[166093]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:46 compute-0 sudo[166091]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:47 compute-0 sudo[166243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvzqeuajdrpufcklqxuhmculdefctdid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066107.163696-278-29198773941189/AnsiballZ_lineinfile.py'
Nov 25 10:21:47 compute-0 sudo[166243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:47 compute-0 python3.9[166245]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:47 compute-0 sudo[166243]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:47 compute-0 sudo[166397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbynvozxctipacsctzbqbdprudpsbmpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066107.72266-278-97070346303823/AnsiballZ_lineinfile.py'
Nov 25 10:21:47 compute-0 sudo[166397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:48 compute-0 python3.9[166399]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:48 compute-0 sudo[166397]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:48 compute-0 sudo[166549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adhezexhbrgsounxudvwmfgesuujjvlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066108.5440383-278-183527016853919/AnsiballZ_lineinfile.py'
Nov 25 10:21:48 compute-0 sudo[166549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:48 compute-0 sshd-session[166293]: Connection closed by authenticating user root 171.244.51.45 port 47018 [preauth]
Nov 25 10:21:49 compute-0 python3.9[166551]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:49 compute-0 sudo[166549]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:49 compute-0 sudo[166701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktrexsnmjpxcuchglvdostxkkvhvrlht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066109.4383905-278-107854003474401/AnsiballZ_lineinfile.py'
Nov 25 10:21:49 compute-0 sudo[166701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:49 compute-0 python3.9[166703]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:49 compute-0 sudo[166701]: pam_unix(sudo:session): session closed for user root
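Taken together, the grep check, the blacklist lineinfile/replace pair, the devnode-stripping replace, and the four defaults lineinfile calls converge /etc/multipath.conf on an empty blacklist block plus four defaults settings. An approximate sketch of the fragment they produce, reconstructed only from the regexp= and line= values logged above (each line is inserted directly after the `defaults` line, so the last one added ends up first):

  # Approximate fragment of /etc/multipath.conf after the edits above;
  # the rest of the file comes from the template copied at 10:21:43.
  cat <<'EOF'
  defaults {
          user_friendly_names no
          skip_kpartx yes
          recheck_wwid yes
          find_multipaths yes
  }
  blacklist {
  }
  EOF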
Nov 25 10:21:50 compute-0 sudo[166853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unjavxeoxfgpbidsbhpvtxtuaqrwaisf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066110.0737336-307-81600388486662/AnsiballZ_stat.py'
Nov 25 10:21:50 compute-0 sudo[166853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:50 compute-0 python3.9[166855]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:21:50 compute-0 sudo[166853]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:51 compute-0 sudo[167007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqvdcnthhfznzmudzzlvyjumgemrntlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066110.7779567-315-22483800342017/AnsiballZ_file.py'
Nov 25 10:21:51 compute-0 sudo[167007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:51 compute-0 python3.9[167009]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:51 compute-0 sudo[167007]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:51 compute-0 sudo[167159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyymuplxwgpzziykfjzmbmrdjrredbax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066111.5618398-324-263791959216042/AnsiballZ_file.py'
Nov 25 10:21:51 compute-0 sudo[167159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:52 compute-0 python3.9[167161]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:21:52 compute-0 sudo[167159]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:52 compute-0 sudo[167311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xalhtimdubizxqovoteajpaydtcqgdjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066112.2318592-332-90513353161754/AnsiballZ_stat.py'
Nov 25 10:21:52 compute-0 sudo[167311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:52 compute-0 python3.9[167313]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:52 compute-0 sudo[167311]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:52 compute-0 sudo[167389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbivpnaxooedewyfzejortehxmdtmauf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066112.2318592-332-90513353161754/AnsiballZ_file.py'
Nov 25 10:21:52 compute-0 sudo[167389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:53 compute-0 python3.9[167391]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:21:53 compute-0 sudo[167389]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:53 compute-0 sudo[167541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjsxpnhzplfxwwugjafrswsktgccmdcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066113.2791803-332-64044757888267/AnsiballZ_stat.py'
Nov 25 10:21:53 compute-0 sudo[167541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:53 compute-0 python3.9[167543]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:53 compute-0 sudo[167541]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:54 compute-0 sudo[167619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olevvoeyczgxmoeulkjzlndsvgmxcdpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066113.2791803-332-64044757888267/AnsiballZ_file.py'
Nov 25 10:21:54 compute-0 sudo[167619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:54 compute-0 python3.9[167621]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:21:54 compute-0 sudo[167619]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:54 compute-0 sudo[167771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbcxnrhtwutzcwwertfguwrgwuhghnjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066114.3719015-355-218530750813720/AnsiballZ_file.py'
Nov 25 10:21:54 compute-0 sudo[167771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:54 compute-0 python3.9[167773]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:54 compute-0 sudo[167771]: pam_unix(sudo:session): session closed for user root
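Note the preset-directory task logs mode=420 rather than an octal string: the playbook evidently passed the mode as a bare YAML number, which Ansible then reports in decimal. 420 decimal is 0644 octal, so the resulting permission bits are the conventional rw-r--r--. A one-liner to check the conversion:

  # 420 (decimal) == 644 (octal), i.e. the usual 0644 permissions.
  printf '%o\n' 420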
Nov 25 10:21:55 compute-0 sudo[167923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcerbskxfruyxfrowxiudmweyiqugnuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066115.0007007-363-207784197569911/AnsiballZ_stat.py'
Nov 25 10:21:55 compute-0 sudo[167923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:55 compute-0 python3.9[167925]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:55 compute-0 sudo[167923]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:55 compute-0 sudo[168001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noiygdznrsiwgwoiorrtxbwcjmehmcam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066115.0007007-363-207784197569911/AnsiballZ_file.py'
Nov 25 10:21:55 compute-0 sudo[168001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:55 compute-0 python3.9[168003]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:55 compute-0 sudo[168001]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:56 compute-0 sudo[168153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrqcdgehgufdvxlkfrsowlcvxdexzmfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066116.0455956-375-249311736198554/AnsiballZ_stat.py'
Nov 25 10:21:56 compute-0 sudo[168153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:56 compute-0 python3.9[168155]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:56 compute-0 sudo[168153]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:56 compute-0 sudo[168231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwmmrewntmelhgiqzsizpznjdioplpku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066116.0455956-375-249311736198554/AnsiballZ_file.py'
Nov 25 10:21:56 compute-0 sudo[168231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:56 compute-0 python3.9[168233]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:56 compute-0 sudo[168231]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:57 compute-0 sudo[168383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hejyiqejglhhcjepxrjiulbcjpjbdfnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066117.1551838-387-237240553370797/AnsiballZ_systemd.py'
Nov 25 10:21:57 compute-0 sudo[168383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:57 compute-0 python3.9[168385]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:21:57 compute-0 systemd[1]: Reloading.
Nov 25 10:21:57 compute-0 systemd-sysv-generator[168415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 25 10:21:57 compute-0 systemd-rc-local-generator[168409]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:21:58 compute-0 sudo[168383]: pam_unix(sudo:session): session closed for user root
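The preset file installed above plus daemon_reload=True and enabled=True amount to the following. The preset body is an assumption (file contents are never logged), but the path and unit name come from the log:

  # Hypothetical preset content; "enable <unit>" is the standard
  # systemd.preset syntax for defaulting a unit to enabled.
  cat <<'EOF' > /etc/systemd/system-preset/91-edpm-container-shutdown.preset
  enable edpm-container-shutdown.service
  EOF
  systemctl daemon-reload
  systemctl enable --now edpm-container-shutdown.service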
Nov 25 10:21:58 compute-0 sudo[168571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxmmqotcefhegpeusngywbkkkqhguspa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066118.3228288-395-145930178579244/AnsiballZ_stat.py'
Nov 25 10:21:58 compute-0 sudo[168571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:58 compute-0 python3.9[168573]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:58 compute-0 sudo[168571]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:59 compute-0 sudo[168649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuzueclklyuplmyjkrevkxmlguojxqkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066118.3228288-395-145930178579244/AnsiballZ_file.py'
Nov 25 10:21:59 compute-0 sudo[168649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:59 compute-0 python3.9[168651]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:21:59 compute-0 sudo[168649]: pam_unix(sudo:session): session closed for user root
Nov 25 10:21:59 compute-0 sudo[168801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdwfzenphofoxdndybxagkasthjqwssp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066119.5388863-407-233741233486469/AnsiballZ_stat.py'
Nov 25 10:21:59 compute-0 sudo[168801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:21:59 compute-0 python3.9[168803]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:21:59 compute-0 sudo[168801]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:00 compute-0 sudo[168879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyykezfmsozanrkctdglaruxyavnfame ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066119.5388863-407-233741233486469/AnsiballZ_file.py'
Nov 25 10:22:00 compute-0 sudo[168879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:00 compute-0 python3.9[168881]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:00 compute-0 sudo[168879]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:00 compute-0 sudo[169031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeomouitktbzjmxwkkpotylqcfiwoqwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066120.5731776-419-226870558796089/AnsiballZ_systemd.py'
Nov 25 10:22:00 compute-0 sudo[169031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:01 compute-0 python3.9[169033]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:01 compute-0 systemd[1]: Reloading.
Nov 25 10:22:01 compute-0 systemd-rc-local-generator[169060]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:22:01 compute-0 systemd-sysv-generator[169065]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 25 10:22:01 compute-0 systemd[1]: Starting Create netns directory...
Nov 25 10:22:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 25 10:22:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 25 10:22:01 compute-0 systemd[1]: Finished Create netns directory.
Nov 25 10:22:01 compute-0 sudo[169031]: pam_unix(sudo:session): session closed for user root
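netns-placeholder.service runs once and exits immediately (both the unit and the transient run-netns-placeholder.mount deactivate right after "Starting Create netns directory"). Its apparent purpose is to ensure /run/netns exists as a mount point that containers can bind with shared propagation, as the ovn_metadata_agent volume list above does with /run/netns:/run/netns:shared. Two commands to verify that state, assuming only standard util-linux and iproute2 tools:

  # /run/netns should now exist as its own mount point.
  findmnt /run/netns
  # Empty output is fine; no persistent namespace needs to remain.
  ip netns list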
Nov 25 10:22:02 compute-0 sudo[169225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyqkkulbhcsexiitvdjoqapudwnqbzep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066121.755555-429-151494467946494/AnsiballZ_file.py'
Nov 25 10:22:02 compute-0 sudo[169225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:02 compute-0 python3.9[169227]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:22:02 compute-0 sudo[169225]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:02 compute-0 sudo[169377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhmjecrbuwdvjphiqokzrfmtojfzeomf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066122.5665433-437-212902672583681/AnsiballZ_stat.py'
Nov 25 10:22:02 compute-0 sudo[169377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:03 compute-0 python3.9[169379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:22:03 compute-0 sudo[169377]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:03 compute-0 sudo[169500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsdrvlutwkpwwqrosbfuoxonbgmhainr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066122.5665433-437-212902672583681/AnsiballZ_copy.py'
Nov 25 10:22:03 compute-0 sudo[169500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:03 compute-0 python3.9[169502]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066122.5665433-437-212902672583681/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:22:03 compute-0 sudo[169500]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:04 compute-0 sudo[169652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbsczlswfxxqfabjywifcxbrmjuufzhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066124.1747732-454-160583383138187/AnsiballZ_file.py'
Nov 25 10:22:04 compute-0 sudo[169652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:04 compute-0 python3.9[169654]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:22:04 compute-0 sudo[169652]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:05 compute-0 sudo[169804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfglwvqilqypmzawcqdjktlyscbwzpti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066124.8775656-462-81798623008794/AnsiballZ_stat.py'
Nov 25 10:22:05 compute-0 sudo[169804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:05 compute-0 python3.9[169806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:22:05 compute-0 sudo[169804]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:05 compute-0 sudo[169927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucmgieixskzhaqqclhidemmoohyxvzxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066124.8775656-462-81798623008794/AnsiballZ_copy.py'
Nov 25 10:22:05 compute-0 sudo[169927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:05 compute-0 python3.9[169929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066124.8775656-462-81798623008794/.source.json _original_basename=.004etcpl follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:05 compute-0 sudo[169927]: pam_unix(sudo:session): session closed for user root
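multipathd.json is a kolla config file: the podman create further below bind-mounts it to /var/lib/kolla/config_files/config.json inside the container. Its actual contents are not logged (content=NOT_LOGGING_PARAMETER); a hypothetical minimal example in the usual kolla layout, with the command line and the empty file lists assumed:

  # Hypothetical kolla config.json for the multipathd container.
  cat <<'EOF' > /var/lib/kolla/config_files/multipathd.json
  {
      "command": "/usr/sbin/multipathd -d",
      "config_files": [],
      "permissions": []
  }
  EOF
  chmod 0600 /var/lib/kolla/config_files/multipathd.json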
Nov 25 10:22:06 compute-0 sudo[170079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsasclbdxodcwkkqxzvwjiblmhrnfcey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066126.1081254-477-266714698040336/AnsiballZ_file.py'
Nov 25 10:22:06 compute-0 sudo[170079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:06 compute-0 python3.9[170081]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:06 compute-0 sudo[170079]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:07 compute-0 sudo[170231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vecpvzmozzagejswxjnwyqwdyftxatel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066126.8079743-485-152898586028501/AnsiballZ_stat.py'
Nov 25 10:22:07 compute-0 sudo[170231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:07 compute-0 sudo[170231]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:07 compute-0 sudo[170354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqhqbwxmuaazkqvwccqlfznhdnzudtyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066126.8079743-485-152898586028501/AnsiballZ_copy.py'
Nov 25 10:22:07 compute-0 sudo[170354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:07 compute-0 sudo[170354]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:08 compute-0 sudo[170506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfpiriclmddnfbkliuwjqfiaiurllmwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066128.1885438-502-163525581057846/AnsiballZ_container_config_data.py'
Nov 25 10:22:08 compute-0 sudo[170506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:08 compute-0 python3.9[170508]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 25 10:22:08 compute-0 sudo[170506]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:09 compute-0 sudo[170667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnkepsbuzkvdweixlyehinbdyrkqodqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066129.135276-511-127258696269456/AnsiballZ_container_config_hash.py'
Nov 25 10:22:09 compute-0 sudo[170667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:09 compute-0 podman[170632]: 2025-11-25 10:22:09.734455238 +0000 UTC m=+0.053304474 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:22:09 compute-0 python3.9[170680]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:22:09 compute-0 sudo[170667]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:10 compute-0 sudo[170830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nronaxanqonmfcdbmxlgbgwlfhgzhdiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066130.1795845-520-216708525849704/AnsiballZ_podman_container_info.py'
Nov 25 10:22:10 compute-0 sudo[170830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:10 compute-0 python3.9[170832]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 25 10:22:10 compute-0 sudo[170830]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:11 compute-0 sudo[171008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swnrwvoerbquyuzjyepwjlwlrzupgyai ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066131.4694479-533-36907417714272/AnsiballZ_edpm_container_manage.py'
Nov 25 10:22:11 compute-0 sudo[171008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:12 compute-0 python3[171010]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:22:12 compute-0 podman[171048]: 2025-11-25 10:22:12.358112509 +0000 UTC m=+0.043204519 container create b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 25 10:22:12 compute-0 podman[171048]: 2025-11-25 10:22:12.333525019 +0000 UTC m=+0.018617049 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 10:22:12 compute-0 python3[171010]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 25 10:22:12 compute-0 sudo[171008]: pam_unix(sudo:session): session closed for user root
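The PODMAN-CONTAINER-DEBUG line above is the literal podman create call that edpm_container_manage assembled from config_data. Note that podman reported a "container create" event only: the container exists but has not been started. A quick check with standard podman commands:

  # The container should exist in the "created" state.
  podman ps --all --filter name=multipathd --format '{{.Names}} {{.Status}}'
  podman inspect multipathd --format '{{.State.Status}}'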
Nov 25 10:22:12 compute-0 sudo[171236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyjufozytjtbmhgqbbcysqbbshukwuhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066132.6588361-541-37985446782560/AnsiballZ_stat.py'
Nov 25 10:22:12 compute-0 sudo[171236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:13 compute-0 python3.9[171238]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:22:13 compute-0 sudo[171236]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:13 compute-0 sudo[171390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfnkmtvehbntpkiuaelecjznyctpghjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066133.54359-550-190103124970994/AnsiballZ_file.py'
Nov 25 10:22:13 compute-0 sudo[171390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:13 compute-0 python3.9[171392]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:14 compute-0 sudo[171390]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:14 compute-0 sudo[171466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgugzfalfhapmkgqclxmrkrwwoqvmcza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066133.54359-550-190103124970994/AnsiballZ_stat.py'
Nov 25 10:22:14 compute-0 sudo[171466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:14 compute-0 python3.9[171468]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:22:14 compute-0 sudo[171466]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:14 compute-0 sudo[171617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyfqcexfuvjbryvlrllkfmpqdkllwtoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066134.5082347-550-715923018661/AnsiballZ_copy.py'
Nov 25 10:22:14 compute-0 sudo[171617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:15 compute-0 python3.9[171619]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764066134.5082347-550-715923018661/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:15 compute-0 sudo[171617]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:15 compute-0 sudo[171707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwjkvfiiaybbyxtcnhezqdmhagwptjwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066134.5082347-550-715923018661/AnsiballZ_systemd.py'
Nov 25 10:22:15 compute-0 sudo[171707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:15 compute-0 podman[171667]: 2025-11-25 10:22:15.460532272 +0000 UTC m=+0.102411503 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:22:15 compute-0 python3.9[171715]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:22:15 compute-0 systemd[1]: Reloading.
Nov 25 10:22:15 compute-0 systemd-rc-local-generator[171749]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:22:15 compute-0 systemd-sysv-generator[171752]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
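
This systemd-sysv-generator warning recurs on every "Reloading." below because the legacy 'network' init script has no native unit, so a compatibility unit is regenerated each time. The synthesized unit can be inspected directly:

    # shows the unit file systemd generated from /etc/rc.d/init.d/network
    systemctl cat network.service
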
Nov 25 10:22:16 compute-0 sudo[171707]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:16 compute-0 sudo[171830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktxkadmuuonzwcolhgfulmeanayranjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066134.5082347-550-715923018661/AnsiballZ_systemd.py'
Nov 25 10:22:16 compute-0 sudo[171830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:16 compute-0 python3.9[171832]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:16 compute-0 systemd[1]: Reloading.
Nov 25 10:22:16 compute-0 systemd-sysv-generator[171864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:22:16 compute-0 systemd-rc-local-generator[171860]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:22:16 compute-0 systemd[1]: Starting multipathd container...
Nov 25 10:22:17 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad59e244f32e82ebb2c58453476c10bd4ac9719a51b12184bc4ecad0144e86cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 10:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad59e244f32e82ebb2c58453476c10bd4ac9719a51b12184bc4ecad0144e86cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 10:22:17 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.
Nov 25 10:22:17 compute-0 podman[171871]: 2025-11-25 10:22:17.348019746 +0000 UTC m=+0.411616973 container init b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:22:17 compute-0 multipathd[171887]: + sudo -E kolla_set_configs
Nov 25 10:22:17 compute-0 podman[171871]: 2025-11-25 10:22:17.369993804 +0000 UTC m=+0.433591021 container start b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 25 10:22:17 compute-0 sudo[171893]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 10:22:17 compute-0 sudo[171893]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:22:17 compute-0 sudo[171893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 10:22:17 compute-0 multipathd[171887]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:22:17 compute-0 multipathd[171887]: INFO:__main__:Validating config file
Nov 25 10:22:17 compute-0 multipathd[171887]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:22:17 compute-0 multipathd[171887]: INFO:__main__:Writing out command to execute
Nov 25 10:22:17 compute-0 sudo[171893]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:17 compute-0 multipathd[171887]: ++ cat /run_command
Nov 25 10:22:17 compute-0 multipathd[171887]: + CMD='/usr/sbin/multipathd -d'
Nov 25 10:22:17 compute-0 multipathd[171887]: + ARGS=
Nov 25 10:22:17 compute-0 multipathd[171887]: + sudo kolla_copy_cacerts
Nov 25 10:22:17 compute-0 sudo[171908]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 10:22:17 compute-0 sudo[171908]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:22:17 compute-0 sudo[171908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 10:22:17 compute-0 sudo[171908]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:17 compute-0 multipathd[171887]: + [[ ! -n '' ]]
Nov 25 10:22:17 compute-0 multipathd[171887]: + . kolla_extend_start
Nov 25 10:22:17 compute-0 multipathd[171887]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 10:22:17 compute-0 multipathd[171887]: Running command: '/usr/sbin/multipathd -d'
Nov 25 10:22:17 compute-0 multipathd[171887]: + umask 0022
Nov 25 10:22:17 compute-0 multipathd[171887]: + exec /usr/sbin/multipathd -d
Nov 25 10:22:17 compute-0 multipathd[171887]: 3045.103170 | --------start up--------
Nov 25 10:22:17 compute-0 multipathd[171887]: 3045.103205 | read /etc/multipath.conf
Nov 25 10:22:17 compute-0 multipathd[171887]: 3045.109875 | path checkers start up
Nov 25 10:22:17 compute-0 podman[171871]: multipathd
Nov 25 10:22:17 compute-0 systemd[1]: Started multipathd container.
Nov 25 10:22:17 compute-0 sudo[171830]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:17 compute-0 podman[171894]: 2025-11-25 10:22:17.615122827 +0000 UTC m=+0.234334039 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
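
The health_status=healthy entries come from the podman healthcheck timer started above. The same probe can be run by hand; its exit status reflects the result of the container's configured test (/openstack/healthcheck):

    # exit code 0 means healthy, non-zero means the check failed
    podman healthcheck run multipathd && echo healthy
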
Nov 25 10:22:18 compute-0 python3.9[172076]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:22:18 compute-0 sudo[172228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqylujmtszbvszxpiwviljhnzvqsdptx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066138.3958573-586-206291653019937/AnsiballZ_command.py'
Nov 25 10:22:18 compute-0 sudo[172228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:18 compute-0 python3.9[172230]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:22:18 compute-0 sudo[172228]: pam_unix(sudo:session): session closed for user root
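
The podman ps filter in the task above is how the role finds every container that mounts /etc/multipath.conf and therefore needs a restart when that file changes; run interactively it looks like:

    # on this node the expected output is: multipathd
    podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'
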
Nov 25 10:22:19 compute-0 sudo[172393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tihftmsmeortdctfmzcxlyiwaahjgqxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066139.1089423-594-30816627548889/AnsiballZ_systemd.py'
Nov 25 10:22:19 compute-0 sudo[172393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:19 compute-0 python3.9[172395]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:22:19 compute-0 systemd[1]: Stopping multipathd container...
Nov 25 10:22:20 compute-0 multipathd[171887]: 3047.876187 | exit (signal)
Nov 25 10:22:20 compute-0 multipathd[171887]: 3047.876244 | --------shut down-------
Nov 25 10:22:20 compute-0 systemd[1]: libpod-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope: Deactivated successfully.
Nov 25 10:22:20 compute-0 podman[172399]: 2025-11-25 10:22:20.269118775 +0000 UTC m=+0.480681674 container died b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 10:22:20 compute-0 systemd[1]: b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8-5b83a8bccafbad09.timer: Deactivated successfully.
Nov 25 10:22:20 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.
Nov 25 10:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8-userdata-shm.mount: Deactivated successfully.
Nov 25 10:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad59e244f32e82ebb2c58453476c10bd4ac9719a51b12184bc4ecad0144e86cc-merged.mount: Deactivated successfully.
Nov 25 10:22:20 compute-0 podman[172399]: 2025-11-25 10:22:20.84777801 +0000 UTC m=+1.059340899 container cleanup b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 10:22:20 compute-0 podman[172399]: multipathd
Nov 25 10:22:20 compute-0 podman[172427]: multipathd
Nov 25 10:22:20 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 25 10:22:20 compute-0 systemd[1]: Stopped multipathd container.
Nov 25 10:22:20 compute-0 systemd[1]: Starting multipathd container...
Nov 25 10:22:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:22:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad59e244f32e82ebb2c58453476c10bd4ac9719a51b12184bc4ecad0144e86cc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 10:22:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad59e244f32e82ebb2c58453476c10bd4ac9719a51b12184bc4ecad0144e86cc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 10:22:21 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.
Nov 25 10:22:21 compute-0 podman[172441]: 2025-11-25 10:22:21.424224513 +0000 UTC m=+0.482726639 container init b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 25 10:22:21 compute-0 multipathd[172456]: + sudo -E kolla_set_configs
Nov 25 10:22:21 compute-0 podman[172441]: 2025-11-25 10:22:21.447811036 +0000 UTC m=+0.506313142 container start b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 10:22:21 compute-0 sudo[172462]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 10:22:21 compute-0 sudo[172462]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:22:21 compute-0 sudo[172462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 10:22:21 compute-0 multipathd[172456]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:22:21 compute-0 multipathd[172456]: INFO:__main__:Validating config file
Nov 25 10:22:21 compute-0 multipathd[172456]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:22:21 compute-0 multipathd[172456]: INFO:__main__:Writing out command to execute
Nov 25 10:22:21 compute-0 sudo[172462]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:21 compute-0 multipathd[172456]: ++ cat /run_command
Nov 25 10:22:21 compute-0 multipathd[172456]: + CMD='/usr/sbin/multipathd -d'
Nov 25 10:22:21 compute-0 multipathd[172456]: + ARGS=
Nov 25 10:22:21 compute-0 multipathd[172456]: + sudo kolla_copy_cacerts
Nov 25 10:22:21 compute-0 sudo[172475]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 10:22:21 compute-0 sudo[172475]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:22:21 compute-0 sudo[172475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 25 10:22:21 compute-0 sudo[172475]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:21 compute-0 multipathd[172456]: + [[ ! -n '' ]]
Nov 25 10:22:21 compute-0 multipathd[172456]: + . kolla_extend_start
Nov 25 10:22:21 compute-0 multipathd[172456]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 25 10:22:21 compute-0 multipathd[172456]: Running command: '/usr/sbin/multipathd -d'
Nov 25 10:22:21 compute-0 multipathd[172456]: + umask 0022
Nov 25 10:22:21 compute-0 multipathd[172456]: + exec /usr/sbin/multipathd -d
Nov 25 10:22:21 compute-0 multipathd[172456]: 3049.185209 | --------start up--------
Nov 25 10:22:21 compute-0 multipathd[172456]: 3049.185237 | read /etc/multipath.conf
Nov 25 10:22:21 compute-0 multipathd[172456]: 3049.191179 | path checkers start up
Nov 25 10:22:21 compute-0 podman[172441]: multipathd
Nov 25 10:22:21 compute-0 systemd[1]: Started multipathd container.
Nov 25 10:22:21 compute-0 sudo[172393]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:21 compute-0 podman[172463]: 2025-11-25 10:22:21.719383759 +0000 UTC m=+0.260071310 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:22:22 compute-0 sudo[172644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnmqosiqvwgnyfzocvrhswpuobdufsvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066141.8847554-602-139252828809971/AnsiballZ_file.py'
Nov 25 10:22:22 compute-0 sudo[172644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:22 compute-0 python3.9[172646]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:22 compute-0 sudo[172644]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:22 compute-0 sudo[172796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueivnbcbluqpibvdpyohiviidgvqhmst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066142.6985824-614-150289583230944/AnsiballZ_file.py'
Nov 25 10:22:22 compute-0 sudo[172796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:23 compute-0 python3.9[172798]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 25 10:22:23 compute-0 sudo[172796]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:23 compute-0 sudo[172948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lozxapnyvghqyzoxksijnugnubnqifss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066143.3443902-622-225498184154830/AnsiballZ_modprobe.py'
Nov 25 10:22:23 compute-0 sudo[172948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:23 compute-0 python3.9[172950]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 25 10:22:23 compute-0 kernel: Key type psk registered
Nov 25 10:22:23 compute-0 sudo[172948]: pam_unix(sudo:session): session closed for user root
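
The modprobe task above uses persistent=disabled, so it only loads the module into the running kernel; persistence is configured separately in the following tasks (see the sketch after the lineinfile task). The manual equivalent, with the kernel's 'Key type psk registered' line apparently logged as the module's TLS/PSK dependencies came in:

    modprobe nvme-fabrics
    # lsmod reports the module with an underscore in its name
    lsmod | grep '^nvme_fabrics'
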
Nov 25 10:22:24 compute-0 sudo[173110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdggeolucjeunuvbulanshpbgjyvfbvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066144.1699255-630-71864210007156/AnsiballZ_stat.py'
Nov 25 10:22:24 compute-0 sudo[173110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:24 compute-0 python3.9[173112]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:22:24 compute-0 sudo[173110]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:25 compute-0 sudo[173233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocrarfpmctvdnhwjxbnlzbkpofntukan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066144.1699255-630-71864210007156/AnsiballZ_copy.py'
Nov 25 10:22:25 compute-0 sudo[173233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:25 compute-0 python3.9[173235]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066144.1699255-630-71864210007156/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:25 compute-0 sudo[173233]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:25 compute-0 sudo[173385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adczpfmrywlclbjgnblufsutvsyapgnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066145.6010075-646-260442518629551/AnsiballZ_lineinfile.py'
Nov 25 10:22:25 compute-0 sudo[173385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:26 compute-0 python3.9[173387]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:26 compute-0 sudo[173385]: pam_unix(sudo:session): session closed for user root
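
Two persistence mechanisms are set up here: a modules-load.d drop-in rendered from the module-load.conf.j2 template, and a line in /etc/modules (created if absent, per create=True). Assuming the rendered files reduce to one module name per line, a hand-written equivalent is:

    echo nvme-fabrics > /etc/modules-load.d/nvme-fabrics.conf
    # append to /etc/modules only if the line is not already present
    grep -qx nvme-fabrics /etc/modules 2>/dev/null || echo nvme-fabrics >> /etc/modules

systemd-modules-load reads modules-load.d at boot, and the service restart below applies the drop-in immediately.
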
Nov 25 10:22:26 compute-0 sudo[173537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulzblkdrxsrzphronkvwnvjnabbuvrdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066146.293141-654-264110012230724/AnsiballZ_systemd.py'
Nov 25 10:22:26 compute-0 sudo[173537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:26 compute-0 python3.9[173539]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:22:26 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 25 10:22:26 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 25 10:22:26 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 25 10:22:27 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 25 10:22:27 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 25 10:22:27 compute-0 sudo[173537]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:27 compute-0 sudo[173693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxuwlulczhhjxbvdbjilaygfnnqfjsmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066147.3614874-662-77232518672434/AnsiballZ_dnf.py'
Nov 25 10:22:27 compute-0 sudo[173693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:27 compute-0 python3.9[173695]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:22:29 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 25 10:22:30 compute-0 systemd[1]: Reloading.
Nov 25 10:22:30 compute-0 systemd-rc-local-generator[173724]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:22:30 compute-0 systemd-sysv-generator[173731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:22:31 compute-0 systemd[1]: Reloading.
Nov 25 10:22:31 compute-0 systemd-rc-local-generator[173764]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:22:31 compute-0 systemd-sysv-generator[173767]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:22:31 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 10:22:31 compute-0 systemd-logind[822]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 25 10:22:31 compute-0 systemd-logind[822]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 25 10:22:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 25 10:22:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 25 10:22:31 compute-0 systemd[1]: Reloading.
Nov 25 10:22:31 compute-0 systemd-sysv-generator[173862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:22:31 compute-0 systemd-rc-local-generator[173859]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:22:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 25 10:22:32 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 25 10:22:32 compute-0 sudo[173693]: pam_unix(sudo:session): session closed for user root
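
With the dnf transaction finished (the interleaved Reloading and man-db-cache-update entries are RPM scriptlet side effects, and the virt*d deactivations are libvirt's modular daemons idling out), the install can be verified:

    # confirm the package and the nvme(1) binary it provides
    rpm -q nvme-cli
    nvme version
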
Nov 25 10:22:33 compute-0 sudo[175147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgifxvyymxwtizvywbuwounvulepgwut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066153.0785272-670-118194527771017/AnsiballZ_systemd_service.py'
Nov 25 10:22:33 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 25 10:22:33 compute-0 sudo[175147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:33 compute-0 python3.9[175150]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:22:33 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 25 10:22:33 compute-0 iscsid[163518]: iscsid shutting down.
Nov 25 10:22:33 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 25 10:22:33 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 25 10:22:33 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 25 10:22:33 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 25 10:22:33 compute-0 systemd[1]: Started Open-iSCSI.
Nov 25 10:22:33 compute-0 sudo[175147]: pam_unix(sudo:session): session closed for user root
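
The skipped condition on iscsi.service (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi) simply means the initiator name already exists, so the one-time setup is not rerun. Both facts can be confirmed with:

    # expected: an InitiatorName=iqn.... line
    test -f /etc/iscsi/initiatorname.iscsi && cat /etc/iscsi/initiatorname.iscsi
    # expected: active
    systemctl is-active iscsid.service
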
Nov 25 10:22:34 compute-0 python3.9[175305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:22:35 compute-0 sudo[175459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqijvtewwdymqukvopezqjytiqgboeds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066155.0354352-688-54852804601937/AnsiballZ_file.py'
Nov 25 10:22:35 compute-0 sudo[175459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:35 compute-0 python3.9[175461]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:35 compute-0 sudo[175459]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 25 10:22:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 25 10:22:35 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.585s CPU time.
Nov 25 10:22:35 compute-0 systemd[1]: run-r91d61492b3174d7f8c7a87b1af8a37c6.service: Deactivated successfully.
Nov 25 10:22:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:22:36.010 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:22:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:22:36.011 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:22:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:22:36.011 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:22:36 compute-0 sudo[175612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgfmxwejjukabcormqukqhpllftouwso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066155.8624835-699-42166088164867/AnsiballZ_systemd_service.py'
Nov 25 10:22:36 compute-0 sudo[175612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:36 compute-0 python3.9[175614]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:22:36 compute-0 systemd[1]: Reloading.
Nov 25 10:22:36 compute-0 systemd-rc-local-generator[175642]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:22:36 compute-0 systemd-sysv-generator[175645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:22:36 compute-0 sudo[175612]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:37 compute-0 python3.9[175800]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:22:37 compute-0 network[175817]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:22:37 compute-0 network[175818]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:22:37 compute-0 network[175819]: It is advised to switch to 'NetworkManager' instead for network management.
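
These deprecation notices appear to be emitted by the legacy network-scripts 'network' service itself as service_facts probes it. On EL9 the advised replacement is NetworkManager, whose view of the interfaces can be checked with:

    # lists each interface and whether NetworkManager manages it
    nmcli device status
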
Nov 25 10:22:39 compute-0 podman[175880]: 2025-11-25 10:22:39.940477959 +0000 UTC m=+0.057991622 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 10:22:42 compute-0 sudo[176111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evqrljjahjpfrwnfvdhaxayojqfqzonn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066162.6184845-718-173872418278518/AnsiballZ_systemd_service.py'
Nov 25 10:22:42 compute-0 sudo[176111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:43 compute-0 python3.9[176113]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:43 compute-0 sudo[176111]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:43 compute-0 sudo[176264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckksreswyouujubgyynlpvqdrkgzbsph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066163.3251452-718-175529050243361/AnsiballZ_systemd_service.py'
Nov 25 10:22:43 compute-0 sudo[176264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:43 compute-0 python3.9[176266]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:43 compute-0 sudo[176264]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:44 compute-0 sudo[176417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrndwenycewjkkzflrycniiddcgzwbfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066164.1675541-718-8201738177950/AnsiballZ_systemd_service.py'
Nov 25 10:22:44 compute-0 sudo[176417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:44 compute-0 python3.9[176419]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:44 compute-0 sudo[176417]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:45 compute-0 sudo[176570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzlwfibewuebwhmihhotsbnnorwabuqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066164.8630702-718-245833789510844/AnsiballZ_systemd_service.py'
Nov 25 10:22:45 compute-0 sudo[176570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:45 compute-0 python3.9[176572]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:45 compute-0 sudo[176570]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:45 compute-0 podman[176574]: 2025-11-25 10:22:45.60662277 +0000 UTC m=+0.077978467 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 10:22:45 compute-0 sudo[176750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnnkfqfeniqqjtuhyopydkydaaoqqpsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066165.665149-718-73400267101036/AnsiballZ_systemd_service.py'
Nov 25 10:22:45 compute-0 sudo[176750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:46 compute-0 python3.9[176752]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:46 compute-0 sudo[176750]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:46 compute-0 sudo[176903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnsgexyfxjsrfywilgetpaegqfwxkvza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066166.3729677-718-266659735491377/AnsiballZ_systemd_service.py'
Nov 25 10:22:46 compute-0 sudo[176903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:46 compute-0 python3.9[176905]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:47 compute-0 sudo[176903]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:47 compute-0 sudo[177056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojovdvpcqjoriacjlclvcwpegvnqnyhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066167.1320422-718-207654171180019/AnsiballZ_systemd_service.py'
Nov 25 10:22:47 compute-0 sudo[177056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:47 compute-0 python3.9[177058]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:47 compute-0 sudo[177056]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:48 compute-0 sudo[177209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwbzbcfrydlmrqmlozedzhnwzxtaltrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066167.9084642-718-230464984317645/AnsiballZ_systemd_service.py'
Nov 25 10:22:48 compute-0 sudo[177209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:48 compute-0 python3.9[177211]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:22:48 compute-0 sudo[177209]: pam_unix(sudo:session): session closed for user root
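
The eight tripleo_nova_* units above are stopped and disabled one task per unit; condensed into a shell loop with the same effect (systemctl disable --now is equivalent to state=stopped plus enabled=False):

    for unit in tripleo_nova_compute tripleo_nova_migration_target \
                tripleo_nova_api_cron tripleo_nova_api tripleo_nova_conductor \
                tripleo_nova_metadata tripleo_nova_scheduler tripleo_nova_vnc_proxy; do
        systemctl disable --now "${unit}.service"
    done
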
Nov 25 10:22:49 compute-0 sudo[177362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njnktanfhpyshvcpcfjslhwyfnjvthnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066168.8431532-777-85918541710214/AnsiballZ_file.py'
Nov 25 10:22:49 compute-0 sudo[177362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:49 compute-0 python3.9[177364]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:49 compute-0 sudo[177362]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:49 compute-0 sudo[177514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykozoxklpxcvugkfqijttlxyxauzattd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066169.4478626-777-70704308096477/AnsiballZ_file.py'
Nov 25 10:22:49 compute-0 sudo[177514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:50 compute-0 python3.9[177516]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:50 compute-0 sudo[177514]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:50 compute-0 sudo[177666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oapuyzpiimrgsaddkltomgfkdpzthchi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066170.257852-777-93632626401077/AnsiballZ_file.py'
Nov 25 10:22:50 compute-0 sudo[177666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:50 compute-0 python3.9[177668]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:50 compute-0 sudo[177666]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:51 compute-0 sudo[177818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttxnsjsfzrtjbqqtfhvsetenrapcirxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066170.9742968-777-45872343011709/AnsiballZ_file.py'
Nov 25 10:22:51 compute-0 sudo[177818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:51 compute-0 python3.9[177820]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:51 compute-0 sudo[177818]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:51 compute-0 sudo[177983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boaqokneyicwsjsxtgwmbmnuujijkumj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066171.6470873-777-134095023182398/AnsiballZ_file.py'
Nov 25 10:22:51 compute-0 sudo[177983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:51 compute-0 podman[177944]: 2025-11-25 10:22:51.956400426 +0000 UTC m=+0.063252485 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 25 10:22:52 compute-0 python3.9[177989]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:52 compute-0 sudo[177983]: pam_unix(sudo:session): session closed for user root
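The interleaved podman health_status entries (multipathd above, ovn_metadata_agent and ovn_controller later in this excerpt) are not Ansible activity: they are podman's periodic healthcheck events for containers labeled managed_by=edpm_ansible, logging the verdict (health_status=healthy, health_failing_streak=0) together with the container's full config_data label. The relevant piece of that definition, re-rendered here as YAML for readability (the JSON source carries the same two keys):

    healthcheck:
      mount: /var/lib/openstack/healthchecks/multipathd
      test: /openstack/healthcheck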
Nov 25 10:22:52 compute-0 sudo[178140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxgkxddlsftamnwpsbjvmhdgvsunbywz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066172.3401613-777-277747187373535/AnsiballZ_file.py'
Nov 25 10:22:52 compute-0 sudo[178140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:52 compute-0 python3.9[178142]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:52 compute-0 sudo[178140]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:53 compute-0 sudo[178292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeebtlddioicpwhaamfbakxztvqpvltn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066173.1392634-777-274617556780469/AnsiballZ_file.py'
Nov 25 10:22:53 compute-0 sudo[178292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:53 compute-0 python3.9[178294]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:53 compute-0 sudo[178292]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:54 compute-0 sudo[178444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqltrxyqomenjypahspoxabwqyaujyxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066173.771706-777-62774526610890/AnsiballZ_file.py'
Nov 25 10:22:54 compute-0 sudo[178444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:54 compute-0 python3.9[178446]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:54 compute-0 sudo[178444]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:54 compute-0 sudo[178596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fudydnaeapzckxblhyjsudjxamkamark ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066174.5542266-834-108058105449325/AnsiballZ_file.py'
Nov 25 10:22:54 compute-0 sudo[178596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:55 compute-0 python3.9[178598]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:55 compute-0 sudo[178596]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:55 compute-0 sudo[178748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjnvorgkrukiopjaoefnmluagpyymgsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066175.3594005-834-80599234014734/AnsiballZ_file.py'
Nov 25 10:22:55 compute-0 sudo[178748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:55 compute-0 python3.9[178750]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:55 compute-0 sudo[178748]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:56 compute-0 sudo[178900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzxjwqfjhlwrcjhzpbsnnxnrmzrhdxme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066176.1487288-834-29330609998954/AnsiballZ_file.py'
Nov 25 10:22:56 compute-0 sudo[178900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:56 compute-0 python3.9[178902]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:56 compute-0 sudo[178900]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:57 compute-0 sudo[179052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdcbbkxujbsbvvpxpkxepruovlrxqapf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066176.7319536-834-115075595249715/AnsiballZ_file.py'
Nov 25 10:22:57 compute-0 sudo[179052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:57 compute-0 python3.9[179054]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:57 compute-0 sudo[179052]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:57 compute-0 sudo[179204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azmxrivxzkvjgzahkbdadatibtyiioef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066177.3750887-834-52273592239306/AnsiballZ_file.py'
Nov 25 10:22:57 compute-0 sudo[179204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:57 compute-0 python3.9[179206]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:57 compute-0 sudo[179204]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:58 compute-0 sudo[179356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xshvsvyuechqbwfaestzfywflilnabfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066178.0521624-834-104980645356188/AnsiballZ_file.py'
Nov 25 10:22:58 compute-0 sudo[179356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:58 compute-0 python3.9[179358]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:58 compute-0 sudo[179356]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:58 compute-0 sudo[179508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frivormtcywlzjahvitecaswxlxogaey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066178.6121523-834-143645097003447/AnsiballZ_file.py'
Nov 25 10:22:58 compute-0 sudo[179508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:59 compute-0 python3.9[179510]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:59 compute-0 sudo[179508]: pam_unix(sudo:session): session closed for user root
Nov 25 10:22:59 compute-0 sudo[179660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrkivtdwvmqagqkzouftrqxsrtsqhnvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066179.177606-834-92505639309356/AnsiballZ_file.py'
Nov 25 10:22:59 compute-0 sudo[179660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:22:59 compute-0 python3.9[179662]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:22:59 compute-0 sudo[179660]: pam_unix(sudo:session): session closed for user root
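The same eight unit files are removed twice, first from /usr/lib/systemd/system and then from /etc/systemd/system, one ansible.builtin.file call with state=absent per path. A sketch under the assumption that a single task loops over the cartesian product of the two directories and the unit names (the paths and state=absent come straight from the log):

    - name: Remove TripleO nova unit files   # hypothetical task name
      become: true
      ansible.builtin.file:
        path: "{{ item.0 }}/{{ item.1 }}"
        state: absent
      loop: "{{ unit_dirs | product(tripleo_nova_units) | list }}"
      vars:
        unit_dirs:
          - /usr/lib/systemd/system
          - /etc/systemd/system
        tripleo_nova_units:
          - tripleo_nova_compute.service
          - tripleo_nova_migration_target.service
          - tripleo_nova_api_cron.service
          - tripleo_nova_api.service
          - tripleo_nova_conductor.service
          - tripleo_nova_metadata.service
          - tripleo_nova_scheduler.service
          - tripleo_nova_vnc_proxy.service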
Nov 25 10:23:00 compute-0 sudo[179812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtywtulpynptmfksitalhrlflstggonw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066180.0170918-892-188510345352926/AnsiballZ_command.py'
Nov 25 10:23:00 compute-0 sudo[179812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:00 compute-0 python3.9[179814]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:00 compute-0 sudo[179812]: pam_unix(sudo:session): session closed for user root
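The certmonger step runs a small shell script through ansible.legacy.command with _uses_shell=True, i.e. the shell module. The test -f guard matters: systemctl mask installs a symlink to /dev/null in /etc/systemd/system, so the script only masks certmonger when no real unit file already occupies that path. A sketch with the script copied verbatim from the log (only the task name is assumed):

    - name: Disable and mask certmonger if active   # hypothetical task name
      become: true
      ansible.builtin.shell: |
        if systemctl is-active certmonger.service; then
          systemctl disable --now certmonger.service
          test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
        fi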
Nov 25 10:23:01 compute-0 python3.9[179966]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:23:01 compute-0 sudo[180116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcqouommooiifsbsfusnbbvumefiewt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066181.5911462-910-178723684487440/AnsiballZ_systemd_service.py'
Nov 25 10:23:01 compute-0 sudo[180116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:02 compute-0 python3.9[180118]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:23:02 compute-0 systemd[1]: Reloading.
Nov 25 10:23:02 compute-0 systemd-sysv-generator[180149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:23:02 compute-0 systemd-rc-local-generator[180144]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:23:02 compute-0 sudo[180116]: pam_unix(sudo:session): session closed for user root
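With the unit files gone, a bare daemon_reload follows (every other parameter is None), which is simply systemctl daemon-reload; the systemd "Reloading." line and the two generator warnings (SysV network script, non-executable rc.local) are its direct output. A sketch (daemon_reload comes from the log; the task name is assumed):

    - name: Reload systemd so removed units are forgotten   # hypothetical task name
      become: true
      ansible.builtin.systemd_service:
        daemon_reload: true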
Nov 25 10:23:03 compute-0 sudo[180303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ficoxuhxxtoylpgwjtyeljwktdtzekin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066182.7192469-918-209953849032147/AnsiballZ_command.py'
Nov 25 10:23:03 compute-0 sudo[180303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:03 compute-0 python3.9[180305]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:03 compute-0 sudo[180303]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:03 compute-0 sudo[180456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdflmhouvcwwwricfamxrgsppfqhoese ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066183.3939114-918-73929000706606/AnsiballZ_command.py'
Nov 25 10:23:03 compute-0 sudo[180456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:03 compute-0 python3.9[180458]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:03 compute-0 sudo[180456]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:04 compute-0 sudo[180609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hotlyqpqycttbzedeqsujxfxvdxncawe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066184.1032352-918-33535128318481/AnsiballZ_command.py'
Nov 25 10:23:04 compute-0 sudo[180609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:04 compute-0 python3.9[180611]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:04 compute-0 sudo[180609]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:05 compute-0 sudo[180762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdcxunnfojikkggyjrjaewtpuljddzls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066184.868837-918-215957267131723/AnsiballZ_command.py'
Nov 25 10:23:05 compute-0 sudo[180762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:05 compute-0 python3.9[180764]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:05 compute-0 sudo[180762]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:05 compute-0 sudo[180915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsoijcscnifzrmrlqovqyhaltdklvgpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066185.4753058-918-183614473703436/AnsiballZ_command.py'
Nov 25 10:23:05 compute-0 sudo[180915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:05 compute-0 python3.9[180917]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:05 compute-0 sudo[180915]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:06 compute-0 sudo[181068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oysswjjosjzcbosqwrrmjrejgduejbqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066186.0731733-918-279274695625160/AnsiballZ_command.py'
Nov 25 10:23:06 compute-0 sudo[181068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:06 compute-0 python3.9[181070]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:06 compute-0 sudo[181068]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:06 compute-0 sudo[181221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldtnblzldluvkhfiodicbqyopeyhqrqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066186.6915925-918-65631164073111/AnsiballZ_command.py'
Nov 25 10:23:06 compute-0 sudo[181221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:07 compute-0 python3.9[181223]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:07 compute-0 sudo[181221]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:07 compute-0 sudo[181374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpwpuptruznphedxkwessuvbziofmviz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066187.2674668-918-41726797629476/AnsiballZ_command.py'
Nov 25 10:23:07 compute-0 sudo[181374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:07 compute-0 python3.9[181376]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:23:07 compute-0 sudo[181374]: pam_unix(sudo:session): session closed for user root
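Each removed unit then gets /usr/bin/systemctl reset-failed, clearing any lingering failed state so the deleted tripleo_nova_* units drop out of systemctl --failed. A sketch assuming one looping task (the command itself is verbatim from the log):

    - name: Reset failed state of removed TripleO nova units   # hypothetical task name
      become: true
      ansible.builtin.command: /usr/bin/systemctl reset-failed {{ item }}
      loop:
        - tripleo_nova_compute.service
        - tripleo_nova_migration_target.service
        - tripleo_nova_api_cron.service
        - tripleo_nova_api.service
        - tripleo_nova_conductor.service
        - tripleo_nova_metadata.service
        - tripleo_nova_scheduler.service
        - tripleo_nova_vnc_proxy.service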
Nov 25 10:23:09 compute-0 sudo[181527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mycmfevqrgymozuvwifnvqblysnbjkah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066188.778796-997-140052880451025/AnsiballZ_file.py'
Nov 25 10:23:09 compute-0 sudo[181527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:09 compute-0 python3.9[181529]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:09 compute-0 sudo[181527]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:09 compute-0 sudo[181679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fasuiorbzbqqjgonhrplugacvowdrmbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066189.3450742-997-76599518581255/AnsiballZ_file.py'
Nov 25 10:23:09 compute-0 sudo[181679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:10 compute-0 python3.9[181681]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:10 compute-0 sudo[181679]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:10 compute-0 podman[181682]: 2025-11-25 10:23:10.116469737 +0000 UTC m=+0.055811827 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 10:23:10 compute-0 sudo[181850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aumftzabyvmjssrbvbriaebcrvebojmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066190.2030463-997-262746554292192/AnsiballZ_file.py'
Nov 25 10:23:10 compute-0 sudo[181850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:10 compute-0 python3.9[181852]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:10 compute-0 sudo[181850]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:11 compute-0 sudo[182002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrtqklksjobypebuvzzbmzjnykaigtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066190.8534029-1019-201760066565880/AnsiballZ_file.py'
Nov 25 10:23:11 compute-0 sudo[182002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:11 compute-0 python3.9[182004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:11 compute-0 sudo[182002]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:11 compute-0 sudo[182154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdeznrfhdqbxmgvfnewblnqjiejarymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066191.422561-1019-189810780960328/AnsiballZ_file.py'
Nov 25 10:23:11 compute-0 sudo[182154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:11 compute-0 python3.9[182156]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:11 compute-0 sudo[182154]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:12 compute-0 sudo[182306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwsaisxioifnsavtdpnfbdqtrczvlgot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066192.1480784-1019-74354070537073/AnsiballZ_file.py'
Nov 25 10:23:12 compute-0 sudo[182306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:12 compute-0 python3.9[182308]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:12 compute-0 sudo[182306]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:13 compute-0 sudo[182458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehubpzajrkcdqoyexxkqzjndqbjiyxcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066192.739637-1019-216882708247147/AnsiballZ_file.py'
Nov 25 10:23:13 compute-0 sudo[182458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:13 compute-0 python3.9[182460]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:13 compute-0 sudo[182458]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:13 compute-0 sudo[182610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjiuwfxxczhhoazswkzlzmalesyyoecd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066193.352082-1019-66906775990015/AnsiballZ_file.py'
Nov 25 10:23:13 compute-0 sudo[182610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:13 compute-0 python3.9[182612]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:13 compute-0 sudo[182610]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:14 compute-0 sudo[182762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kribbmtewvvasnashcrawoukhtpughzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066193.9463124-1019-232397585619011/AnsiballZ_file.py'
Nov 25 10:23:14 compute-0 sudo[182762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:14 compute-0 python3.9[182764]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:14 compute-0 sudo[182762]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:14 compute-0 sudo[182914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwjsuyuatzxgmphllxqkktcsabbdkmgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066194.6506374-1019-22054035645091/AnsiballZ_file.py'
Nov 25 10:23:14 compute-0 sudo[182914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:15 compute-0 python3.9[182916]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:15 compute-0 sudo[182914]: pam_unix(sudo:session): session closed for user root
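The run of ansible.builtin.file tasks above prepares the EDPM directory layout: /var/lib/openstack/config/{nova,containers,nova_nvme_cleaner}, /var/lib/nova and /var/lib/nova/instances, /var/lib/_nova_secontext, /etc/ceph, /etc/multipath, /etc/nvme and /run/openvswitch, all with setype=container_file_t so podman can bind-mount them under SELinux. A sketch of the zuul-owned 0755 subset, assuming a loop and task name (the root-owned 0750 /etc/ceph and the mode-less /etc/multipath, /etc/nvme and /run/openvswitch follow the same pattern with those fields varied):

    - name: Create nova config and state directories   # hypothetical task name
      become: true
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: zuul
        group: zuul
        mode: "0755"
        setype: container_file_t
      loop:
        - /var/lib/openstack/config/nova
        - /var/lib/openstack/config/containers
        - /var/lib/openstack/config/nova_nvme_cleaner
        - /var/lib/nova
        - /var/lib/_nova_secontext
        - /var/lib/nova/instances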
Nov 25 10:23:15 compute-0 podman[182941]: 2025-11-25 10:23:15.992392268 +0000 UTC m=+0.110773798 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:23:20 compute-0 sudo[183093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnoqbrjpwjyanxjsokbtwcmtruuhaqdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066199.8427238-1188-10627010915981/AnsiballZ_getent.py'
Nov 25 10:23:20 compute-0 sudo[183093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:20 compute-0 python3.9[183095]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 25 10:23:20 compute-0 sudo[183093]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:21 compute-0 sudo[183246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzyzjmhnioauouyyzuxvryxoeiphrdbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066200.62363-1196-231224113299760/AnsiballZ_group.py'
Nov 25 10:23:21 compute-0 sudo[183246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:21 compute-0 python3.9[183248]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:23:21 compute-0 groupadd[183249]: group added to /etc/group: name=nova, GID=42436
Nov 25 10:23:21 compute-0 groupadd[183249]: group added to /etc/gshadow: name=nova
Nov 25 10:23:21 compute-0 groupadd[183249]: new group: name=nova, GID=42436
Nov 25 10:23:21 compute-0 sudo[183246]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:22 compute-0 sudo[183414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbcrjapzzewcvtyrzyxkgaswydgpzurf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066201.4470377-1204-109990548549908/AnsiballZ_user.py'
Nov 25 10:23:22 compute-0 sudo[183414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:22 compute-0 podman[183378]: 2025-11-25 10:23:22.115621661 +0000 UTC m=+0.055156988 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:23:22 compute-0 python3.9[183420]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 10:23:22 compute-0 useradd[183425]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 25 10:23:22 compute-0 useradd[183425]: add 'nova' to group 'libvirt'
Nov 25 10:23:22 compute-0 useradd[183425]: add 'nova' to shadow group 'libvirt'
Nov 25 10:23:22 compute-0 sudo[183414]: pam_unix(sudo:session): session closed for user root
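The nova service account is created with a fixed UID/GID (42436): getent probes for an existing passwd entry, the group is added first, then the user with the matching UID, /bin/sh as shell, and supplementary membership in libvirt (useradd confirms both group additions). A sketch (IDs, shell and groups are from the log; task names are assumed):

    - name: Create nova group   # hypothetical task name
      become: true
      ansible.builtin.group:
        name: nova
        gid: 42436
        state: present

    - name: Create nova user   # hypothetical task name
      become: true
      ansible.builtin.user:
        name: nova
        uid: 42436
        group: nova
        groups:
          - libvirt
        shell: /bin/sh
        comment: nova user
        state: present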
Nov 25 10:23:23 compute-0 sshd-session[183456]: Accepted publickey for zuul from 192.168.122.30 port 52770 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:23:23 compute-0 systemd-logind[822]: New session 25 of user zuul.
Nov 25 10:23:23 compute-0 systemd[1]: Started Session 25 of User zuul.
Nov 25 10:23:23 compute-0 sshd-session[183456]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:23:23 compute-0 sshd-session[183459]: Received disconnect from 192.168.122.30 port 52770:11: disconnected by user
Nov 25 10:23:23 compute-0 sshd-session[183459]: Disconnected from user zuul 192.168.122.30 port 52770
Nov 25 10:23:23 compute-0 sshd-session[183456]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:23:23 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Nov 25 10:23:23 compute-0 systemd-logind[822]: Session 25 logged out. Waiting for processes to exit.
Nov 25 10:23:23 compute-0 systemd-logind[822]: Removed session 25.
Nov 25 10:23:24 compute-0 python3.9[183609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:24 compute-0 python3.9[183730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066203.866514-1229-105502060582743/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:25 compute-0 python3.9[183880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:25 compute-0 python3.9[183956]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:26 compute-0 python3.9[184106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:27 compute-0 python3.9[184227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066206.2562027-1229-182324997195316/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:27 compute-0 python3.9[184377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:28 compute-0 python3.9[184498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066207.3600512-1229-142089860738093/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:28 compute-0 python3.9[184648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:29 compute-0 python3.9[184769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066208.4297569-1229-184378680155175/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:30 compute-0 python3.9[184919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:30 compute-0 python3.9[185040]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066209.580792-1229-261533542913786/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
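Each ansible.legacy.stat / ansible.legacy.copy pair above is the two-step backend of a copy or template task: the controller stats the destination, compares checksums, and ships the source (Jinja-rendered where the _original_basename ends in .j2, as for config.json and 02-nova-host-specific.conf) only when it differs. A sketch of the template-driven variant for config.json (dest, mode and setype are from the log; src and the task name are assumed):

    - name: Render nova config.json   # hypothetical task name
      ansible.builtin.template:
        src: config.json.j2          # assumed source; the log only shows _original_basename
        dest: /var/lib/openstack/config/nova/config.json
        mode: "0644"
        setype: container_file_t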
Nov 25 10:23:31 compute-0 sudo[185190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ythvrrwrjhcycgnaakamxjmqmhezfvjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066210.8844237-1312-274844937242539/AnsiballZ_file.py'
Nov 25 10:23:31 compute-0 sudo[185190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:31 compute-0 python3.9[185192]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:23:31 compute-0 sudo[185190]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:31 compute-0 sudo[185342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aakbyhxkfdsrmkghqcgsdsgpjumqetkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066211.5402296-1320-265042454083135/AnsiballZ_copy.py'
Nov 25 10:23:31 compute-0 sudo[185342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:31 compute-0 python3.9[185344]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:23:32 compute-0 sudo[185342]: pam_unix(sudo:session): session closed for user root
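These two tasks wire up SSH access to the nova account (presumably for migration traffic, given the ssh-config laid down earlier): /home/nova/.ssh is created at 0700 and the deployment's public key is copied on the remote host (remote_src=True) from /var/lib/openstack/config/nova/ssh-publickey into authorized_keys at 0600. A sketch (paths, owners and modes are from the log; task names are assumed):

    - name: Create nova .ssh directory   # hypothetical task name
      become: true
      ansible.builtin.file:
        path: /home/nova/.ssh
        state: directory
        owner: nova
        group: nova
        mode: "0700"

    - name: Install nova authorized_keys   # hypothetical task name
      become: true
      ansible.builtin.copy:
        remote_src: true
        src: /var/lib/openstack/config/nova/ssh-publickey
        dest: /home/nova/.ssh/authorized_keys
        owner: nova
        group: nova
        mode: "0600"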
Nov 25 10:23:32 compute-0 sudo[185494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpxzlrogswcbtzhtwycajjpzhgbskgkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066212.1694689-1328-264994309841670/AnsiballZ_stat.py'
Nov 25 10:23:32 compute-0 sudo[185494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:32 compute-0 python3.9[185496]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:23:32 compute-0 sudo[185494]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:33 compute-0 sudo[185646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilqflfagvvibzowoasiynagrpgfpspgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066212.8163-1336-98503885417407/AnsiballZ_stat.py'
Nov 25 10:23:33 compute-0 sudo[185646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:33 compute-0 python3.9[185648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:33 compute-0 sudo[185646]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:33 compute-0 sudo[185769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elggvxomtagtfqamcsupahirgdujhtxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066212.8163-1336-98503885417407/AnsiballZ_copy.py'
Nov 25 10:23:33 compute-0 sudo[185769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:33 compute-0 python3.9[185771]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764066212.8163-1336-98503885417407/.source _original_basename=.ikpg34k4 follow=False checksum=a2ff2051e3ce2affb0bbcf984cce69316911766b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 25 10:23:33 compute-0 sudo[185769]: pam_unix(sudo:session): session closed for user root
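/var/lib/nova/compute_id gets special treatment: after a stat shows it absent, it is written nova-owned with mode 0400 and attributes=+i, the filesystem immutable bit, so the stable compute UUID cannot be changed or unlinked, even by root, without an explicit chattr -i first. This pairs with the NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id environment handed to nova_compute_init at the end of this excerpt. A sketch (dest, owner, mode and the +i attribute are from the log; the content variable is assumed):

    - name: Write the immutable compute_id   # hypothetical task name
      become: true
      ansible.builtin.copy:
        content: "{{ nova_compute_uuid }}"   # assumed variable; the log does not show content
        dest: /var/lib/nova/compute_id
        owner: nova
        group: nova
        mode: "0400"
        attributes: "+i"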
Nov 25 10:23:34 compute-0 python3.9[185923]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:23:35 compute-0 python3.9[186075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:23:36.011 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:23:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:23:36.012 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:23:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:23:36.012 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:23:36 compute-0 python3.9[186196]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066214.7924428-1362-163986769465113/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:36 compute-0 python3.9[186346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:23:37 compute-0 python3.9[186467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066216.2048903-1377-235246138312233/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:23:37 compute-0 sudo[186617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fawccfaixkxuhwrkrsdyeitjywvyizro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066217.4710617-1394-257680775884705/AnsiballZ_container_config_data.py'
Nov 25 10:23:37 compute-0 sudo[186617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:37 compute-0 python3.9[186619]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 25 10:23:37 compute-0 sudo[186617]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:38 compute-0 sudo[186769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhcmlyuynpovorxjooortslwtrpgvdwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066218.1828947-1403-225840914959906/AnsiballZ_container_config_hash.py'
Nov 25 10:23:38 compute-0 sudo[186769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:38 compute-0 python3.9[186771]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:23:38 compute-0 sudo[186769]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:39 compute-0 sudo[186921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygvtemknohcnfftmgrijszyohynnlpdw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066218.8968816-1413-177663592593069/AnsiballZ_edpm_container_manage.py'
Nov 25 10:23:39 compute-0 sudo[186921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:39 compute-0 python3[186923]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:23:39 compute-0 podman[186958]: 2025-11-25 10:23:39.590250585 +0000 UTC m=+0.021024872 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 10:23:39 compute-0 podman[186958]: 2025-11-25 10:23:39.928036731 +0000 UTC m=+0.358810988 container create 99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 10:23:39 compute-0 python3[186923]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
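Per the config_data above, nova_compute_init is a one-shot container (restart=never, detach=False, net=none) that runs nova_statedir_ownership.py as root and pipes its output through logger with the nova_compute_init tag over the bind-mounted /dev/log. Assuming the names exactly as logged, its result can be checked afterwards with:

    # Output of the one-shot init container lands in the host journal via logger:
    journalctl -t nova_compute_init
    # The stopped container itself is still inspectable:
    podman logs nova_compute_init
    podman inspect --format '{{ .State.ExitCode }}' nova_compute_init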
Nov 25 10:23:40 compute-0 sudo[186921]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:40 compute-0 sudo[187156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akitjuvksiwlxzlgqtdekikbgztnwjmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066220.2368798-1421-219517441002733/AnsiballZ_stat.py'
Nov 25 10:23:40 compute-0 sudo[187156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:40 compute-0 podman[187119]: 2025-11-25 10:23:40.509331492 +0000 UTC m=+0.051327579 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
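Periodic health_status events like the one above come from podman's healthcheck timer running the /openstack/healthcheck test mounted into the container. The same check can be triggered on demand (container name as logged; the inspect field path varies slightly across podman versions):

    # Run the configured healthcheck once, then read back the recorded state:
    podman healthcheck run ovn_metadata_agent && echo healthy
    podman inspect --format '{{ .State.Health.Status }}' ovn_metadata_agent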
Nov 25 10:23:40 compute-0 python3.9[187164]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:23:40 compute-0 sudo[187156]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:41 compute-0 sudo[187316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwtcdxqsrkmszrfszapaprdhdhcarers ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066221.1183877-1433-149741620750577/AnsiballZ_container_config_data.py'
Nov 25 10:23:41 compute-0 sudo[187316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:41 compute-0 python3.9[187318]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 25 10:23:41 compute-0 sudo[187316]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:42 compute-0 sudo[187468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlwuvrgdzntozfqsrlezojsvyngbnrxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066221.8048782-1442-264199769250056/AnsiballZ_container_config_hash.py'
Nov 25 10:23:42 compute-0 sudo[187468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:42 compute-0 python3.9[187470]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:23:42 compute-0 sudo[187468]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:42 compute-0 sudo[187620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhibbkwafnpflcoauhnpgkkewxgdowpj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066222.6016896-1452-186919529740864/AnsiballZ_edpm_container_manage.py'
Nov 25 10:23:42 compute-0 sudo[187620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:43 compute-0 python3[187622]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:23:43 compute-0 podman[187660]: 2025-11-25 10:23:43.292378604 +0000 UTC m=+0.022897806 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 25 10:23:43 compute-0 podman[187660]: 2025-11-25 10:23:43.32617931 +0000 UTC m=+0.056698492 container create b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 10:23:43 compute-0 python3[187622]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
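edpm_container_manage identifies everything it owns through labels (config_id, container_name, managed_by, config_data), so the container just created can be cross-checked against the logged command line, for example:

    # Verify the labels and bind mounts applied by the logged `podman create`:
    podman inspect --format '{{ index .Config.Labels "config_id" }}' nova_compute
    podman inspect --format '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' nova_compute
    # All containers managed by the EDPM roles on this host:
    podman ps -a --filter label=managed_by=edpm_ansible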
Nov 25 10:23:43 compute-0 sudo[187620]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:43 compute-0 sudo[187845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhkzaavdknbjnydcizzmsnxrnrirjubt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066223.604843-1460-110725455001847/AnsiballZ_stat.py'
Nov 25 10:23:43 compute-0 sudo[187845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:44 compute-0 python3.9[187847]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:23:44 compute-0 sudo[187845]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:44 compute-0 sudo[187999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owqxuuhkmqxhabpwiqxlylkrfbphmakb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066224.3700526-1469-237478673758297/AnsiballZ_file.py'
Nov 25 10:23:44 compute-0 sudo[187999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:44 compute-0 python3.9[188001]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:23:44 compute-0 sudo[187999]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:45 compute-0 sudo[188150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwihddfclgglokjngnjadzuhtooyfwct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066225.01803-1469-273343515491269/AnsiballZ_copy.py'
Nov 25 10:23:45 compute-0 sudo[188150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:45 compute-0 python3.9[188152]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764066225.01803-1469-273343515491269/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:23:45 compute-0 sudo[188150]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:45 compute-0 sudo[188226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhvtmnizcpvhqjdufjvuddbabtwbbhee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066225.01803-1469-273343515491269/AnsiballZ_systemd.py'
Nov 25 10:23:45 compute-0 sudo[188226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:46 compute-0 python3.9[188228]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:23:46 compute-0 systemd[1]: Reloading.
Nov 25 10:23:46 compute-0 systemd-sysv-generator[188281]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:23:46 compute-0 systemd-rc-local-generator[188277]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:23:46 compute-0 podman[188230]: 2025-11-25 10:23:46.421047867 +0000 UTC m=+0.092152442 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:23:46 compute-0 sudo[188226]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:46 compute-0 sudo[188362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohbhssixisvkncmxltdkzsefrcymihoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066225.01803-1469-273343515491269/AnsiballZ_systemd.py'
Nov 25 10:23:46 compute-0 sudo[188362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:47 compute-0 python3.9[188364]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:23:47 compute-0 systemd[1]: Reloading.
Nov 25 10:23:47 compute-0 systemd-sysv-generator[188398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:23:47 compute-0 systemd-rc-local-generator[188393]: /etc/rc.d/rc.local is not marked executable, skipping.
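The ansible-systemd task above (name=edpm_nova_compute.service, state=restarted, enabled=True) boils down to the following systemctl sequence, assuming the unit file installed by the earlier copy task at /etc/systemd/system/edpm_nova_compute.service:

    systemctl daemon-reload                      # already performed by the preceding daemon_reload=True task
    systemctl enable edpm_nova_compute.service
    systemctl restart edpm_nova_compute.service
    systemctl is-active edpm_nova_compute.service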
Nov 25 10:23:47 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 10:23:47 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:47 compute-0 podman[188404]: 2025-11-25 10:23:47.718660012 +0000 UTC m=+0.093023517 container init b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251118, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:23:47 compute-0 podman[188404]: 2025-11-25 10:23:47.725744511 +0000 UTC m=+0.100107996 container start b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:23:47 compute-0 podman[188404]: nova_compute
Nov 25 10:23:47 compute-0 nova_compute[188419]: + sudo -E kolla_set_configs
Nov 25 10:23:47 compute-0 systemd[1]: Started nova_compute container.
Nov 25 10:23:47 compute-0 sudo[188362]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Validating config file
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying service configuration files
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Deleting /etc/ceph
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Creating directory /etc/ceph
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Writing out command to execute
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 10:23:47 compute-0 nova_compute[188419]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 10:23:47 compute-0 nova_compute[188419]: ++ cat /run_command
Nov 25 10:23:47 compute-0 nova_compute[188419]: + CMD=nova-compute
Nov 25 10:23:47 compute-0 nova_compute[188419]: + ARGS=
Nov 25 10:23:47 compute-0 nova_compute[188419]: + sudo kolla_copy_cacerts
Nov 25 10:23:47 compute-0 nova_compute[188419]: + [[ ! -n '' ]]
Nov 25 10:23:47 compute-0 nova_compute[188419]: + . kolla_extend_start
Nov 25 10:23:47 compute-0 nova_compute[188419]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 10:23:47 compute-0 nova_compute[188419]: Running command: 'nova-compute'
Nov 25 10:23:47 compute-0 nova_compute[188419]: + umask 0022
Nov 25 10:23:47 compute-0 nova_compute[188419]: + exec nova-compute
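The '+'-prefixed lines are the shell trace (set -x) of kolla_start running inside the container. Condensed, the traced flow is roughly the following (a sketch of what the trace shows, not the literal script):

    sudo -E kolla_set_configs      # copy files per /var/lib/kolla/config_files/config.json (COPY_ALWAYS)
    CMD=$(cat /run_command)        # here: nova-compute
    ARGS=
    sudo kolla_copy_cacerts        # install the mounted CA bundle into the container trust store
    . kolla_extend_start           # image-specific startup hook
    echo "Running command: '$CMD'"
    umask 0022
    exec $CMD $ARGS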
Nov 25 10:23:48 compute-0 python3.9[188580]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:23:49 compute-0 python3.9[188731]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.050 188423 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.050 188423 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.051 188423 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.051 188423 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.195 188423 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.208 188423 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.208 188423 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
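The failed grep above is a capability probe rather than an error: before handling volume attachments, the compute service checks whether the iscsiadm on its PATH mentions node.session.scan (manual-scan support). In this container /usr/sbin/iscsiadm was replaced by the run-on-host wrapper during kolla_set_configs, so exit status 1 simply means the marker string is absent:

    # Reproduce the probe; exit 1 = manual iSCSI scans treated as unsupported.
    grep -F node.session.scan /sbin/iscsiadm; echo "exit=$?"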
Nov 25 10:23:50 compute-0 python3.9[188883]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.828 188423 INFO nova.virt.driver [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.937 188423 INFO nova.compute.provider_config [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.952 188423 DEBUG oslo_concurrency.lockutils [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.952 188423 DEBUG oslo_concurrency.lockutils [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.952 188423 DEBUG oslo_concurrency.lockutils [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.953 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.953 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.953 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.953 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.953 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.953 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.954 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.954 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.954 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.954 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.954 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.954 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.955 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.955 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.955 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.955 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.955 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.956 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.956 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.956 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.956 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.956 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.956 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.957 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.957 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.957 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.957 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.957 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.958 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.958 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.958 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.958 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.958 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.958 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.959 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.959 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.959 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.959 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.959 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.959 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.960 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.960 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.960 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.960 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.960 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.961 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.961 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.961 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.961 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.961 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.961 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.962 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.962 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.962 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.962 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.962 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.962 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.963 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.963 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.963 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.963 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.963 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.963 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.963 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.964 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.964 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.964 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.964 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.964 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.965 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.965 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.965 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.965 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.965 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.965 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.966 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.966 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.966 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.966 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.966 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.966 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.967 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.967 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.967 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.967 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.967 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.967 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.967 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.968 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.968 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.968 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.968 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.968 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.968 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.969 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.969 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.969 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.969 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.969 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.969 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.970 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.970 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.970 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.970 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.970 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.970 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.970 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.971 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.971 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.971 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.971 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.971 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.972 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.972 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.972 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.972 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.972 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.972 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.972 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.973 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.973 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.973 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.973 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.973 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.973 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.973 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.974 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.974 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.974 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.974 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.974 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.974 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.975 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.975 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.975 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.975 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.975 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.976 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.976 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.976 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.976 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.976 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.976 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.977 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.977 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.977 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.977 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.977 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.978 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.978 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.978 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.978 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.978 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.979 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.979 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.979 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.979 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.979 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.979 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.979 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.980 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.980 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.980 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.980 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.980 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.980 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.980 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.981 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.981 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.981 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.981 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.981 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.981 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.982 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.982 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.982 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.982 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.982 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.982 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.982 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.983 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.983 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.983 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.983 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.983 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.983 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.983 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.984 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.984 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.984 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.984 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.984 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.984 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.984 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.985 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.985 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.985 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.985 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.985 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.985 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.986 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.986 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.986 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.986 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.986 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.986 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.986 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.987 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.987 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.987 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.987 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.987 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.987 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.987 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.988 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.988 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.988 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.988 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.988 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.988 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.988 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.989 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.989 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.989 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.989 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.989 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.989 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.990 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.990 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.990 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.990 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.990 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.990 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.990 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.991 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.991 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.991 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.991 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.991 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.992 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.992 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.992 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.992 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.992 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.993 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.993 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.993 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.993 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.993 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.993 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.994 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.994 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.994 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.994 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.994 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.995 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.995 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.995 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.995 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.995 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.996 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.996 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.996 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.996 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.996 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.997 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.997 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.997 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.997 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.997 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.997 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.998 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.998 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.998 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.998 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.998 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.999 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.999 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.999 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.999 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:50 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.999 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:50.999 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.000 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.000 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.000 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.000 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.000 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.000 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.001 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.001 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.001 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.001 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.001 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.001 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.001 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.002 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.002 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.002 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.002 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.002 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.002 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.002 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.003 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.003 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.003 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.003 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.003 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.003 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.003 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.004 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.004 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.004 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.004 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.004 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.004 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.004 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.005 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.005 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.005 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.005 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.005 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.005 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.005 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.006 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.006 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.006 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.006 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.006 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.006 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.006 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.007 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.007 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.007 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.007 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.007 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.008 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.008 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.008 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.008 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.008 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.008 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.009 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.009 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.009 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.009 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.009 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.009 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.010 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.010 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.010 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.010 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.010 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.010 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.010 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.011 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.011 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.011 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.011 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.012 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.012 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.012 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.012 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.012 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.012 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.013 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.013 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.013 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.013 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.013 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.013 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.014 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.014 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.014 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.014 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.014 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.014 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.015 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.015 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.015 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.015 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.015 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.016 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.016 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.016 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.016 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.016 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.016 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.016 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.017 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.017 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.017 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.017 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.017 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.018 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.018 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.018 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.018 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.018 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.018 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.019 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.019 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.019 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.019 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.019 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.019 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.019 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.020 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.020 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.020 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.020 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.020 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.020 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.021 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.021 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.021 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.021 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.021 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.021 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.022 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.022 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.022 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.022 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.022 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.022 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.023 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.023 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.023 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.023 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.023 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.023 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.024 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.024 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.024 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.024 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.024 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.025 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.025 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.025 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.025 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.025 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.025 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.025 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.026 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.026 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.026 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.026 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.026 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.026 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.026 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.027 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.027 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.027 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.027 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.027 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.028 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.028 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.028 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.028 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.028 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.028 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.028 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.029 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.029 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.029 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.029 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.029 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.029 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.029 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.030 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.030 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.030 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.030 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.030 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.030 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.030 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.031 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.031 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.031 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.031 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.031 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.031 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.031 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.032 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.032 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.032 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.032 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.032 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.032 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.033 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.033 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.033 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.033 188423 WARNING oslo_config.cfg [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 10:23:51 compute-0 nova_compute[188419]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 10:23:51 compute-0 nova_compute[188419]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 10:23:51 compute-0 nova_compute[188419]: and ``live_migration_inbound_addr`` respectively.
Nov 25 10:23:51 compute-0 nova_compute[188419]: ).  Its value may be silently ignored in the future.
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.033 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.034 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.034 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.034 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.034 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.034 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.034 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.035 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.035 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.035 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.035 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.035 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.036 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.036 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.036 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.036 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.036 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.036 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.037 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.037 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.037 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.037 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.038 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.038 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.038 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.038 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.038 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.038 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.039 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.039 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.039 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.039 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.040 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.040 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.040 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.040 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.041 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.041 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.041 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.041 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.041 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.042 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.042 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.042 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.042 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.042 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.043 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.043 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.043 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.043 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.043 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.044 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.044 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.044 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.044 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.044 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.045 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.045 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.045 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.045 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.045 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.045 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.046 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.046 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.046 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.046 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.046 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.047 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.047 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.047 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.047 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.047 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.047 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.048 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.048 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.048 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.048 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.048 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.049 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.049 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.049 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.049 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.049 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.050 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.050 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.050 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.050 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.050 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.051 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.051 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.051 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.051 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.051 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.052 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.052 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.052 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.052 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.052 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.053 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.053 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.053 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.053 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.053 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.054 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.054 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.054 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.054 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.054 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.054 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.055 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.055 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.055 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.055 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.055 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.056 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.056 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.056 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.056 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.056 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.057 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.057 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.057 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.057 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.058 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.058 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.058 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.058 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.058 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.058 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.059 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.059 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.059 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.059 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.059 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.060 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.060 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.060 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.060 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.060 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.060 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.061 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.061 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.061 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.061 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.061 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.061 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.062 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.062 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.062 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.062 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.063 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.063 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.063 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.063 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.063 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.063 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.064 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.064 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.064 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.064 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.064 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.064 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.065 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.065 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.065 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.065 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.065 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.065 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.065 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.066 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.066 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.066 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.066 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.066 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.066 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.066 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.067 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.067 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.067 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.067 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.067 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.067 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.068 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.068 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.068 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.068 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.068 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.068 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.068 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.069 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.069 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.069 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.069 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.069 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.069 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.070 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.070 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.070 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.070 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.070 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.070 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.071 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.071 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.071 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.071 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.071 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.071 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.071 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.072 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.072 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.072 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.072 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.072 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.072 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.072 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.073 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.073 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.073 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.073 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.073 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.074 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.074 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.074 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.074 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.074 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.074 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.075 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.075 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.075 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.075 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.075 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.075 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.076 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.076 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.076 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.076 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.076 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.076 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.076 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.077 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.077 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.077 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.077 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.077 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.078 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.078 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.078 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.078 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.078 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.079 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.079 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.079 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.079 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.079 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.079 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.080 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.080 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.080 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.080 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.080 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.080 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.081 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.081 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.081 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.081 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.081 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.081 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.082 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.082 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.082 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.082 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.082 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.083 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.083 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.083 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.083 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.083 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.084 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.084 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.084 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.084 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.084 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.084 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.085 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.085 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.085 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.085 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.085 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.086 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.086 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.086 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.086 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.086 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.086 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.087 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.087 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.087 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.087 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.087 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.087 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.088 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.088 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.088 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.088 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.088 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.088 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.089 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.089 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.089 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.089 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.089 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.089 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.090 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.090 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.090 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.090 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.090 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.090 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.091 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.091 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.091 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.091 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.091 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.091 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.092 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.092 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.092 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.092 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.092 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.092 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.092 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.093 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.093 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.093 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.093 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.093 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.093 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.094 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.094 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.094 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.094 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.095 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.095 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.095 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.095 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.095 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.095 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.096 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.096 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.096 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.096 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.096 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.096 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.097 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.097 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.097 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.097 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.097 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.098 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.098 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.098 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.098 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.098 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.098 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.099 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.099 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.099 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.099 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.099 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.099 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.100 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.100 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.100 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.100 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.100 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.100 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.100 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.101 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.101 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.101 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.101 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.101 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.101 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.101 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.102 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.102 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.102 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.102 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.102 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.103 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.103 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.103 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.103 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.103 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.103 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.104 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.104 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.104 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.104 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.104 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.104 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.105 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.105 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.105 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.105 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.105 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.105 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.106 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.106 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.106 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.106 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.106 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.106 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.107 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.107 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.107 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.107 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.107 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.107 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.108 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.108 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.108 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.108 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.108 188423 DEBUG oslo_service.service [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
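The block ending with the row of asterisks (cfg.py:2613 is the closing separator) is oslo.config's standard startup dump: at DEBUG level, log_opt_values() walks every registered group and prints one "group.option = value" line per option, which is exactly what the nova_compute lines above are. A minimal, self-contained sketch of the same mechanism follows; the demo_privileged group and its two options are hypothetical stand-ins, and only register_opts()/log_opt_values() are the real oslo.config API.

    # Sketch of the oslo.config value dump seen above.
    # Group and option names here are hypothetical examples.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts(
        [
            cfg.IntOpt('thread_pool_size', default=8),
            cfg.StrOpt('user', default=None),
        ],
        group='demo_privileged',
    )
    CONF([])  # parse an empty arg list instead of sys.argv

    # Emits one DEBUG line per registered option, framed by
    # '****' separator rows, as in the nova_compute log above.
    CONF.log_opt_values(LOG, logging.DEBUG)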
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.110 188423 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.121 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.122 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.123 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.123 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 10:23:51 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 25 10:23:51 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.195 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fa377f8f250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.199 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fa377f8f250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.199 188423 INFO nova.virt.libvirt.driver [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Connection event '1' reason 'None'
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.219 188423 WARNING nova.virt.libvirt.driver [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 10:23:51 compute-0 nova_compute[188419]: 2025-11-25 10:23:51.220 188423 DEBUG nova.virt.libvirt.volume.mount [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
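The host.py lines above show the order of Nova's libvirt bring-up: start the native event thread, start the dispatch thread, open qemu:///system, then register lifecycle and connection callbacks (the ComputeHostNotFound warning is the first start of a node whose service record does not exist yet). A hedged sketch of that event-loop-plus-callback pattern with the plain libvirt Python bindings, without Nova's eventlet plumbing, and with a purely illustrative callback body:

    # Sketch of the libvirt event pattern used by Nova's host.py:
    # register the default event loop, run it in a thread, then
    # open the connection and register a lifecycle callback.
    import threading

    import libvirt

    libvirt.virEventRegisterDefaultImpl()

    def _event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_event_loop, daemon=True).start()

    conn = libvirt.open('qemu:///system')

    def _lifecycle_cb(conn, dom, event, detail, opaque):
        # Illustrative only: Nova translates these into
        # instance lifecycle events instead of printing.
        print(f'domain {dom.name()}: event {event} detail {detail}')

    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, _lifecycle_cb, None)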
Nov 25 10:23:51 compute-0 sudo[189077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bspfbfbzgmpeghsufwlnksxeqftbupgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066230.588206-1529-105026310505111/AnsiballZ_podman_container.py'
Nov 25 10:23:51 compute-0 sudo[189077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:51 compute-0 python3.9[189079]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 10:23:51 compute-0 sudo[189077]: pam_unix(sudo:session): session closed for user root
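The AnsiballZ invocation above runs containers.podman.podman_container with name=nova_nvme_cleaner, state=absent, force_delete=True; everything else is the module's default-parameter echo. A sketch of the state the task converges on, expressed against the podman CLI rather than the Ansible module (container name taken from the log, CLI flags are standard podman):

    # Sketch: ensure a named container is absent, matching the
    # podman_container task logged above.
    import subprocess

    NAME = 'nova_nvme_cleaner'

    # 'podman container exists' exits 0 iff the container is present.
    present = subprocess.run(
        ['podman', 'container', 'exists', NAME]).returncode == 0

    if present:
        # --force stops a running container before removal,
        # mirroring force_delete=True in the module invocation.
        subprocess.run(['podman', 'rm', '--force', NAME], check=True)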
Nov 25 10:23:51 compute-0 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.053 188423 INFO nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]: 
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <host>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <uuid>2c41005d-4220-44aa-a37c-4fdfb3e65238</uuid>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <arch>x86_64</arch>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model>EPYC-Rome-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <vendor>AMD</vendor>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <microcode version='16777317'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <signature family='23' model='49' stepping='0'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='x2apic'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='tsc-deadline'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='osxsave'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='hypervisor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='tsc_adjust'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='spec-ctrl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='stibp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='arch-capabilities'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='cmp_legacy'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='topoext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='virt-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='lbrv'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='tsc-scale'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='vmcb-clean'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='pause-filter'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='pfthreshold'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='svme-addr-chk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='rdctl-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='mds-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature name='pschange-mc-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <pages unit='KiB' size='4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <pages unit='KiB' size='2048'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <pages unit='KiB' size='1048576'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <power_management>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <suspend_mem/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <suspend_disk/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <suspend_hybrid/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </power_management>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <iommu support='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <migration_features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <live/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <uri_transports>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <uri_transport>tcp</uri_transport>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <uri_transport>rdma</uri_transport>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </uri_transports>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </migration_features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <topology>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <cells num='1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <cell id='0'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:           <memory unit='KiB'>7864312</memory>
Nov 25 10:23:52 compute-0 nova_compute[188419]:           <pages unit='KiB' size='4'>1966078</pages>
Nov 25 10:23:52 compute-0 nova_compute[188419]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 10:23:52 compute-0 nova_compute[188419]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 10:23:52 compute-0 nova_compute[188419]:           <distances>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <sibling id='0' value='10'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:           </distances>
Nov 25 10:23:52 compute-0 nova_compute[188419]:           <cpus num='8'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:           </cpus>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         </cell>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </cells>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </topology>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <cache>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </cache>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <secmodel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model>selinux</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <doi>0</doi>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </secmodel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <secmodel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model>dac</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <doi>0</doi>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </secmodel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </host>
Nov 25 10:23:52 compute-0 nova_compute[188419]: 
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <guest>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <os_type>hvm</os_type>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <arch name='i686'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <wordsize>32</wordsize>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <domain type='qemu'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <domain type='kvm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </arch>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <pae/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <nonpae/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <acpi default='on' toggle='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <apic default='on' toggle='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <cpuselection/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <deviceboot/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <disksnapshot default='on' toggle='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <externalSnapshot/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </guest>
Nov 25 10:23:52 compute-0 nova_compute[188419]: 
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <guest>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <os_type>hvm</os_type>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <arch name='x86_64'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <wordsize>64</wordsize>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <domain type='qemu'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <domain type='kvm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </arch>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <acpi default='on' toggle='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <apic default='on' toggle='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <cpuselection/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <deviceboot/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <disksnapshot default='on' toggle='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <externalSnapshot/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </guest>
Nov 25 10:23:52 compute-0 nova_compute[188419]: 
Nov 25 10:23:52 compute-0 nova_compute[188419]: </capabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]: 
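The <capabilities> document Nova just logged is the return value of libvirt's getCapabilities() on the qemu:///system connection opened at 10:23:51. A sketch of fetching the same XML and reading back the fields visible above (assumes libvirt-python is installed and the caller can reach the libvirt system socket; element paths follow the dump):

    # Sketch: fetch and inspect the host capabilities XML that
    # nova-compute logs at startup.
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        caps = ET.fromstring(conn.getCapabilities())

        cpu = caps.find('./host/cpu')
        print('arch  :', cpu.findtext('arch'))    # x86_64 above
        print('model :', cpu.findtext('model'))   # EPYC-Rome-v4 above
        print('vendor:', cpu.findtext('vendor'))  # AMD above

        # Per-NUMA-cell memory and CPU count, as in <topology><cells>.
        for cell in caps.findall('./host/topology/cells/cell'):
            mem = cell.findtext('memory')
            ncpus = cell.find('cpus').get('num')
            print(f"cell {cell.get('id')}: {mem} KiB, {ncpus} cpus")
    finally:
        conn.close()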
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.061 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.087 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 10:23:52 compute-0 nova_compute[188419]: <domainCapabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <domain>kvm</domain>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <arch>i686</arch>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <vcpu max='4096'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <iothreads supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <os supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <enum name='firmware'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <loader supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>rom</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pflash</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='readonly'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>yes</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>no</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='secure'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>no</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </loader>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </os>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='host-passthrough' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='hostPassthroughMigratable'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>on</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>off</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='maximum' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='maximumMigratable'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>on</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>off</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='host-model' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <vendor>AMD</vendor>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='x2apic'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='hypervisor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='stibp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='overflow-recov'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='succor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='lbrv'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc-scale'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='flushbyasid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='pause-filter'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='pfthreshold'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='disable' name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='custom' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Dhyana-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Genoa'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='auto-ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='auto-ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-128'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-256'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-512'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v6'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v7'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='KnightsMill'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512er'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512pf'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='KnightsMill-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512er'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512pf'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G4-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tbm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G5-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tbm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SierraForest'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cmpccxadd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SierraForest-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cmpccxadd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='athlon'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='athlon-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='core2duo'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='core2duo-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='coreduo'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='coreduo-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='n270'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='n270-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='phenom'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='phenom-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <memoryBacking supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <enum name='sourceType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>file</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>anonymous</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>memfd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </memoryBacking>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <devices>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <disk supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='diskDevice'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>disk</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>cdrom</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>floppy</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>lun</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='bus'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>fdc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>scsi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>sata</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-non-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </disk>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <graphics supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vnc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>egl-headless</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dbus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </graphics>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <video supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='modelType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vga</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>cirrus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>none</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>bochs</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ramfb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </video>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <hostdev supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='mode'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>subsystem</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='startupPolicy'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>default</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>mandatory</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>requisite</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>optional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='subsysType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pci</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>scsi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='capsType'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='pciBackend'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </hostdev>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <rng supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-non-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>random</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>egd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>builtin</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </rng>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <filesystem supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='driverType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>path</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>handle</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtiofs</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </filesystem>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <tpm supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tpm-tis</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tpm-crb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>emulator</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>external</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendVersion'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>2.0</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </tpm>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <redirdev supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='bus'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </redirdev>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <channel supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pty</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>unix</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </channel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <crypto supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>qemu</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>builtin</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </crypto>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <interface supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>default</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>passt</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </interface>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <panic supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>isa</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>hyperv</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </panic>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <console supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>null</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pty</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dev</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>file</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pipe</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>stdio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>udp</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tcp</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>unix</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>qemu-vdagent</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dbus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </console>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </devices>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <gic supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <vmcoreinfo supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <genid supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <backingStoreInput supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <backup supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <async-teardown supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <ps2 supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <sev supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <sgx supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <hyperv supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='features'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>relaxed</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vapic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>spinlocks</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vpindex</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>runtime</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>synic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>stimer</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>reset</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vendor_id</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>frequencies</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>reenlightenment</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tlbflush</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ipi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>avic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>emsr_bitmap</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>xmm_input</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <defaults>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <spinlocks>4095</spinlocks>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <stimer_direct>on</stimer_direct>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </defaults>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </hyperv>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <launchSecurity supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='sectype'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tdx</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </launchSecurity>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </features>
Nov 25 10:23:52 compute-0 nova_compute[188419]: </domainCapabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.095 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 10:23:52 compute-0 nova_compute[188419]: <domainCapabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <domain>kvm</domain>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <arch>i686</arch>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <vcpu max='240'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <iothreads supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <os supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <enum name='firmware'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <loader supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>rom</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pflash</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='readonly'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>yes</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>no</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='secure'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>no</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </loader>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </os>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='host-passthrough' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='hostPassthroughMigratable'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>on</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>off</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='maximum' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='maximumMigratable'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>on</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>off</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='host-model' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <vendor>AMD</vendor>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='x2apic'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='hypervisor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='stibp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='overflow-recov'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='succor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='lbrv'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc-scale'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='flushbyasid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='pause-filter'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='pfthreshold'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='disable' name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='custom' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Dhyana-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Genoa'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='auto-ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='auto-ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-128'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-256'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-512'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v6'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v7'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='KnightsMill'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512er'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512pf'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='KnightsMill-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512er'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512pf'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G4-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tbm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G5-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tbm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SierraForest'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cmpccxadd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SierraForest-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cmpccxadd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='athlon'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='athlon-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='core2duo'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='core2duo-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='coreduo'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='coreduo-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='n270'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='n270-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='phenom'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='phenom-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <memoryBacking supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <enum name='sourceType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>file</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>anonymous</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>memfd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </memoryBacking>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <devices>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <disk supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='diskDevice'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>disk</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>cdrom</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>floppy</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>lun</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='bus'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ide</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>fdc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>scsi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>sata</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-non-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </disk>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <graphics supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vnc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>egl-headless</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dbus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </graphics>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <video supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='modelType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vga</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>cirrus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>none</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>bochs</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ramfb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </video>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <hostdev supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='mode'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>subsystem</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='startupPolicy'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>default</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>mandatory</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>requisite</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>optional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='subsysType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pci</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>scsi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='capsType'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='pciBackend'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </hostdev>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <rng supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-non-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>random</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>egd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>builtin</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </rng>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <filesystem supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='driverType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>path</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>handle</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtiofs</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </filesystem>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <tpm supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tpm-tis</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tpm-crb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>emulator</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>external</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendVersion'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>2.0</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </tpm>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <redirdev supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='bus'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </redirdev>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <channel supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pty</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>unix</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </channel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <crypto supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>qemu</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>builtin</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </crypto>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <interface supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>default</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>passt</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </interface>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <panic supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>isa</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>hyperv</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </panic>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <console supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>null</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pty</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dev</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>file</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pipe</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>stdio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>udp</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tcp</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>unix</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>qemu-vdagent</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dbus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </console>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </devices>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <gic supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <vmcoreinfo supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <genid supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <backingStoreInput supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <backup supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <async-teardown supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <ps2 supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <sev supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <sgx supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <hyperv supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='features'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>relaxed</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vapic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>spinlocks</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vpindex</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>runtime</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>synic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>stimer</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>reset</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vendor_id</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>frequencies</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>reenlightenment</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tlbflush</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ipi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>avic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>emsr_bitmap</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>xmm_input</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <defaults>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <spinlocks>4095</spinlocks>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <stimer_direct>on</stimer_direct>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </defaults>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </hyperv>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <launchSecurity supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='sectype'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tdx</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </launchSecurity>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </features>
Nov 25 10:23:52 compute-0 nova_compute[188419]: </domainCapabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
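
The dump above is the domainCapabilities document that Nova's libvirt driver fetches through libvirt's virConnectGetDomainCapabilities API (see the _get_domain_capabilities trailer on the preceding line); the next message repeats the query for the q35 machine type. Below is a minimal sketch of reproducing the same query outside Nova, assuming the libvirt-python bindings are installed and using the emulator path /usr/libexec/qemu-kvm that this host reports; the CPU-model filtering is illustrative only, not Nova's actual code:

    import libvirt
    import xml.etree.ElementTree as ET

    # Connect to the local system libvirtd, as nova-compute does.
    conn = libvirt.open('qemu:///system')
    for machine in ('pc', 'q35'):
        # Same parameters Nova logs above: emulator binary, arch,
        # machine type, and virt type.
        capxml = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm', 'x86_64', machine, 'kvm')
        root = ET.fromstring(capxml)
        # Keep only custom-mode CPU models the host can actually run
        # (usable='yes'); in the dump, usable='no' models carry a
        # <blockers> element listing the missing host features.
        usable = [m.text
                  for m in root.findall("./cpu/mode[@name='custom']/model")
                  if m.get('usable') == 'yes']
        print(machine, sorted(usable))
    conn.close()

The equivalent one-off check from a shell on the compute node is: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm
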
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.124 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.129 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 10:23:52 compute-0 nova_compute[188419]: <domainCapabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <domain>kvm</domain>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <arch>x86_64</arch>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <vcpu max='4096'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <iothreads supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <os supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <enum name='firmware'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>efi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <loader supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>rom</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pflash</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='readonly'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>yes</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>no</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='secure'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>yes</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>no</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </loader>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </os>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='host-passthrough' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='hostPassthroughMigratable'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>on</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>off</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='maximum' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='maximumMigratable'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>on</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>off</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='host-model' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <vendor>AMD</vendor>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='x2apic'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='hypervisor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='stibp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='overflow-recov'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='succor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='lbrv'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc-scale'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='flushbyasid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='pause-filter'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='pfthreshold'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='disable' name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='custom' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Dhyana-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Genoa'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='auto-ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='auto-ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-128'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-256'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-512'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v6'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v7'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='KnightsMill'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512er'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512pf'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='KnightsMill-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512er'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512pf'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G4-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tbm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G5-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tbm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SierraForest'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cmpccxadd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SierraForest-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cmpccxadd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 podman[189275]: 2025-11-25 10:23:52.24700668 +0000 UTC m=+0.065730849 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='athlon'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='athlon-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='core2duo'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='core2duo-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='coreduo'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='coreduo-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='n270'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='n270-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='phenom'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='phenom-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <memoryBacking supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <enum name='sourceType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>file</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>anonymous</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>memfd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </memoryBacking>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <devices>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <disk supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='diskDevice'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>disk</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>cdrom</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>floppy</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>lun</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='bus'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>fdc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>scsi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>sata</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-non-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </disk>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <graphics supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vnc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>egl-headless</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dbus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </graphics>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <video supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='modelType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vga</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>cirrus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>none</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>bochs</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ramfb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </video>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <hostdev supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='mode'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>subsystem</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='startupPolicy'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>default</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>mandatory</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>requisite</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>optional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='subsysType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pci</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>scsi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='capsType'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='pciBackend'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </hostdev>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <rng supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-non-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>random</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>egd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>builtin</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </rng>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <filesystem supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='driverType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>path</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>handle</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtiofs</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </filesystem>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <tpm supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tpm-tis</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tpm-crb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>emulator</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>external</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendVersion'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>2.0</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </tpm>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <redirdev supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='bus'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </redirdev>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <channel supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pty</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>unix</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </channel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <crypto supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>qemu</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>builtin</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </crypto>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <interface supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>default</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>passt</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </interface>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <panic supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>isa</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>hyperv</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </panic>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <console supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>null</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pty</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dev</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>file</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pipe</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>stdio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>udp</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tcp</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>unix</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>qemu-vdagent</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dbus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </console>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </devices>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <gic supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <vmcoreinfo supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <genid supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <backingStoreInput supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <backup supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <async-teardown supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <ps2 supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <sev supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <sgx supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <hyperv supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='features'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>relaxed</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vapic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>spinlocks</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vpindex</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>runtime</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>synic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>stimer</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>reset</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vendor_id</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>frequencies</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>reenlightenment</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tlbflush</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ipi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>avic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>emsr_bitmap</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>xmm_input</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <defaults>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <spinlocks>4095</spinlocks>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <stimer_direct>on</stimer_direct>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </defaults>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </hyperv>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <launchSecurity supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='sectype'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tdx</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </launchSecurity>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </features>
Nov 25 10:23:52 compute-0 nova_compute[188419]: </domainCapabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.191 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 10:23:52 compute-0 nova_compute[188419]: <domainCapabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <domain>kvm</domain>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <arch>x86_64</arch>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <vcpu max='240'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <iothreads supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <os supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <enum name='firmware'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <loader supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>rom</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pflash</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='readonly'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>yes</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>no</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='secure'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>no</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </loader>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </os>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='host-passthrough' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='hostPassthroughMigratable'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>on</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>off</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='maximum' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='maximumMigratable'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>on</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>off</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='host-model' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <vendor>AMD</vendor>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='x2apic'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='hypervisor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='stibp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='overflow-recov'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='succor'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='lbrv'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='tsc-scale'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='flushbyasid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='pause-filter'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='pfthreshold'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <feature policy='disable' name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <mode name='custom' supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Broadwell-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Cooperlake-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Denverton-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Dhyana-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Genoa'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='auto-ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='auto-ibrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Milan-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amd-psfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='stibp-always-on'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-Rome-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='EPYC-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='GraniteRapids-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-128'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-256'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx10-512'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='prefetchiti'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Haswell-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v6'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Icelake-Server-v7'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='IvyBridge-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='KnightsMill'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512er'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512pf'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='KnightsMill-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512er'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512pf'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G4-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tbm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Opteron_G5-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fma4'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tbm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xop'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SapphireRapids-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='amx-tile'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-bf16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-fp16'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bitalg'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrc'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fzrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='la57'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='taa-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xfd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SierraForest'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cmpccxadd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='SierraForest-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ifma'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cmpccxadd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fbsdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='fsrs'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ibrs-all'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mcdt-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pbrsb-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='psdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='serialize'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vaes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Client-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='hle'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='rtm'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Skylake-Server-v5'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512bw'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512cd'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512dq'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512f'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='avx512vl'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='invpcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pcid'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='pku'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='mpx'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v2'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v3'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='core-capability'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='split-lock-detect'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='Snowridge-v4'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='cldemote'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='erms'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='gfni'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdir64b'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='movdiri'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='xsaves'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='athlon'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='athlon-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='core2duo'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='core2duo-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='coreduo'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='coreduo-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='n270'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='n270-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='ss'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='phenom'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <blockers model='phenom-v1'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnow'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <feature name='3dnowext'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </blockers>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </mode>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </cpu>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <memoryBacking supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <enum name='sourceType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>file</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>anonymous</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <value>memfd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </memoryBacking>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <devices>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <disk supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='diskDevice'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>disk</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>cdrom</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>floppy</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>lun</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='bus'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ide</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>fdc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>scsi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>sata</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-non-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </disk>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <graphics supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vnc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>egl-headless</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dbus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </graphics>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <video supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='modelType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vga</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>cirrus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>none</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>bochs</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ramfb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </video>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <hostdev supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='mode'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>subsystem</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='startupPolicy'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>default</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>mandatory</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>requisite</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>optional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='subsysType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pci</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>scsi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='capsType'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='pciBackend'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </hostdev>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <rng supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtio-non-transitional</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>random</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>egd</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>builtin</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </rng>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <filesystem supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='driverType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>path</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>handle</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>virtiofs</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </filesystem>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <tpm supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tpm-tis</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tpm-crb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>emulator</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>external</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendVersion'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>2.0</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </tpm>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <redirdev supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='bus'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>usb</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </redirdev>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <channel supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pty</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>unix</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </channel>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <crypto supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>qemu</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendModel'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>builtin</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </crypto>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <interface supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='backendType'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>default</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>passt</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </interface>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <panic supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='model'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>isa</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>hyperv</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </panic>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <console supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='type'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>null</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vc</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pty</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dev</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>file</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>pipe</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>stdio</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>udp</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tcp</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>unix</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>qemu-vdagent</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>dbus</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </console>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </devices>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   <features>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <gic supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <vmcoreinfo supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <genid supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <backingStoreInput supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <backup supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <async-teardown supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <ps2 supported='yes'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <sev supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <sgx supported='no'/>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <hyperv supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='features'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>relaxed</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vapic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>spinlocks</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vpindex</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>runtime</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>synic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>stimer</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>reset</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>vendor_id</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>frequencies</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>reenlightenment</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tlbflush</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>ipi</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>avic</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>emsr_bitmap</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>xmm_input</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <defaults>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <spinlocks>4095</spinlocks>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <stimer_direct>on</stimer_direct>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </defaults>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </hyperv>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     <launchSecurity supported='yes'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       <enum name='sectype'>
Nov 25 10:23:52 compute-0 nova_compute[188419]:         <value>tdx</value>
Nov 25 10:23:52 compute-0 nova_compute[188419]:       </enum>
Nov 25 10:23:52 compute-0 nova_compute[188419]:     </launchSecurity>
Nov 25 10:23:52 compute-0 nova_compute[188419]:   </features>
Nov 25 10:23:52 compute-0 nova_compute[188419]: </domainCapabilities>
Nov 25 10:23:52 compute-0 nova_compute[188419]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
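[editor's note] The XML block above is libvirt's domainCapabilities document, which nova-compute fetches once at startup (via _get_domain_capabilities in host.py) to learn what the emulator, machine type and virt type combination supports. A minimal sketch of the same query with the libvirt Python bindings; the connection URI is an assumption (nova-compute normally talks to the local virtqemud):

    import libvirt
    import xml.etree.ElementTree as ET

    # URI is an assumption for illustration.
    conn = libvirt.open('qemu:///system')

    # emulator/arch/machine/virttype; None lets libvirt pick defaults.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)

    root = ET.fromstring(caps_xml)
    # e.g. the disk buses advertised in the <devices><disk> enum above
    buses = [v.text for v in root.findall(".//disk/enum[@name='bus']/value")]
    print(buses)  # ['ide', 'fdc', 'scsi', 'virtio', 'usb', 'sata']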
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.259 188423 DEBUG nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.259 188423 INFO nova.virt.libvirt.host [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Secure Boot support detected
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.262 188423 INFO nova.virt.libvirt.driver [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.262 188423 INFO nova.virt.libvirt.driver [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.271 188423 DEBUG nova.virt.libvirt.driver [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
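[editor's note] The "Enabling emulated TPM support" decision ties back to the <tpm supported='yes'> block in the capabilities XML: emulated vTPM is only usable when libvirt advertises the emulator backend and at least one TPM model. A plain ElementTree probe over the document shown above (illustrative, not nova's actual code path in _check_vtpm_support):

    import xml.etree.ElementTree as ET

    def vtpm_supported(domcaps_xml: str) -> bool:
        """True if the domainCapabilities XML advertises an emulated
        TPM with at least one model (tpm-tis / tpm-crb)."""
        root = ET.fromstring(domcaps_xml)
        tpm = root.find('./devices/tpm')
        if tpm is None or tpm.get('supported') != 'yes':
            return False
        models = {v.text for v in tpm.findall("./enum[@name='model']/value")}
        backends = {v.text for v in tpm.findall("./enum[@name='backendModel']/value")}
        return bool(models) and 'emulator' in backends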
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.320 188423 INFO nova.virt.node [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Determined node identity a660730c-fa97-4a71-acf8-b1f3eef924ba from /var/lib/nova/compute_id
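[editor's note] The node identity is read from a small stable-UUID file, which is what keeps the compute node UUID constant across service restarts and redeploys. A rough sketch of the read-or-generate pattern; the helper name is hypothetical (nova's real logic lives in nova.virt.node), only the file path comes from the log:

    import os
    import uuid

    COMPUTE_ID_FILE = '/var/lib/nova/compute_id'

    def get_local_node_uuid() -> str:
        # Hypothetical helper mirroring the read-or-generate pattern.
        if os.path.exists(COMPUTE_ID_FILE):
            with open(COMPUTE_ID_FILE) as f:
                return str(uuid.UUID(f.read().strip()))  # validates format
        node_uuid = str(uuid.uuid4())
        with open(COMPUTE_ID_FILE, 'w') as f:
            f.write(node_uuid)
        return node_uuid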
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.349 188423 WARNING nova.compute.manager [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Compute nodes ['a660730c-fa97-4a71-acf8-b1f3eef924ba'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.395 188423 INFO nova.compute.manager [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 10:23:52 compute-0 python3.9[189276]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:23:52 compute-0 systemd[1]: Stopping nova_compute container...
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.578 188423 WARNING nova.compute.manager [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.578 188423 DEBUG oslo_concurrency.lockutils [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.579 188423 DEBUG oslo_concurrency.lockutils [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.579 188423 DEBUG oslo_concurrency.lockutils [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.579 188423 DEBUG nova.compute.resource_tracker [None req-1858b2aa-a814-474e-b4b4-f2597fbf73fc - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
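[editor's note] The Acquiring/acquired/released triple around "compute_resources" is oslo.concurrency's standard named-lock logging; the resource tracker serializes everything that mutates local resource accounting behind that single lock. The decorator below is the real oslo API that produces exactly those DEBUG lines; the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # anything here runs under the named in-process lock
        pass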
Nov 25 10:23:52 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.631 188423 DEBUG oslo_concurrency.lockutils [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.631 188423 DEBUG oslo_concurrency.lockutils [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:23:52 compute-0 nova_compute[188419]: 2025-11-25 10:23:52.632 188423 DEBUG oslo_concurrency.lockutils [None req-7c2494ca-eb47-476c-9b9a-66a48a88cf1f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:23:52 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 25 10:23:53 compute-0 virtqemud[189024]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 25 10:23:53 compute-0 virtqemud[189024]: hostname: compute-0
Nov 25 10:23:53 compute-0 virtqemud[189024]: End of file while reading data: Input/output error
Nov 25 10:23:53 compute-0 systemd[1]: libpod-b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e.scope: Deactivated successfully.
Nov 25 10:23:53 compute-0 systemd[1]: libpod-b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e.scope: Consumed 3.359s CPU time.
Nov 25 10:23:53 compute-0 podman[189300]: 2025-11-25 10:23:53.162870989 +0000 UTC m=+0.597744313 container died b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible)
Nov 25 10:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e-userdata-shm.mount: Deactivated successfully.
Nov 25 10:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d-merged.mount: Deactivated successfully.
Nov 25 10:23:53 compute-0 podman[189300]: 2025-11-25 10:23:53.388414923 +0000 UTC m=+0.823288247 container cleanup b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 25 10:23:53 compute-0 podman[189300]: nova_compute
Nov 25 10:23:53 compute-0 podman[189351]: nova_compute
Nov 25 10:23:53 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 25 10:23:53 compute-0 systemd[1]: Stopped nova_compute container.
Nov 25 10:23:53 compute-0 systemd[1]: Starting nova_compute container...
Nov 25 10:23:53 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f457754b47a6b49b5e6bc63e19816397af381c4878e96c1b849b735dde55b42d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:53 compute-0 podman[189364]: 2025-11-25 10:23:53.624829552 +0000 UTC m=+0.155893596 container init b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=nova_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 10:23:53 compute-0 podman[189364]: 2025-11-25 10:23:53.630390598 +0000 UTC m=+0.161454632 container start b94fff5918ee73e80502f077aaccaa9883b877ec202cd73a20e4256e533a635e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
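[editor's note] The config_data blob podman logs with each event is the EDPM container definition. Translated to a command line it is roughly the following; the dict values are copied from the event above (volume list truncated with a comment), and the flag spellings are illustrative rather than the exact invocation edpm_ansible performs:

    # Build a podman argv from the logged config_data (sketch).
    config = {
        'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified',
        'privileged': True,
        'user': 'nova',
        'net': 'host',
        'pid': 'host',
        'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
        'volumes': [
            '/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro',
            '/run/libvirt:/run/libvirt:shared',
            '/var/lib/nova:/var/lib/nova:shared',
            # ...remaining mounts as listed in the event
        ],
        'command': 'kolla_start',
    }

    argv = ['podman', 'run', '--name', 'nova_compute']
    if config['privileged']:
        argv.append('--privileged')
    argv += ['--user', config['user'], '--net', config['net'], '--pid', config['pid']]
    for key, val in config['environment'].items():
        argv += ['--env', f'{key}={val}']
    for vol in config['volumes']:
        argv += ['--volume', vol]
    argv += [config['image'], config['command']]
    print(' '.join(argv))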
Nov 25 10:23:53 compute-0 nova_compute[189381]: + sudo -E kolla_set_configs
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Validating config file
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying service configuration files
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /etc/ceph
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Creating directory /etc/ceph
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /etc/ceph
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Writing out command to execute
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 25 10:23:53 compute-0 nova_compute[189381]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 25 10:23:53 compute-0 nova_compute[189381]: ++ cat /run_command
Nov 25 10:23:53 compute-0 nova_compute[189381]: + CMD=nova-compute
Nov 25 10:23:53 compute-0 nova_compute[189381]: + ARGS=
Nov 25 10:23:53 compute-0 nova_compute[189381]: + sudo kolla_copy_cacerts
Nov 25 10:23:53 compute-0 podman[189364]: nova_compute
Nov 25 10:23:53 compute-0 nova_compute[189381]: + [[ ! -n '' ]]
Nov 25 10:23:53 compute-0 nova_compute[189381]: + . kolla_extend_start
Nov 25 10:23:53 compute-0 nova_compute[189381]: Running command: 'nova-compute'
Nov 25 10:23:53 compute-0 nova_compute[189381]: + echo 'Running command: '\''nova-compute'\'''
Nov 25 10:23:53 compute-0 nova_compute[189381]: + umask 0022
Nov 25 10:23:53 compute-0 nova_compute[189381]: + exec nova-compute
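[editor's note] The kolla_set_configs run above shows the COPY_ALWAYS strategy at work: on every container start it re-reads /var/lib/kolla/config_files/config.json, deletes each stale target, copies the source into place, resets permissions, then the wrapper execs whatever /run_command names. A simplified sketch of that copy loop (the real kolla_set_configs also handles owners, globs, optional sources and directory merges):

    import json
    import os
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        config = json.load(f)

    for entry in config.get('config_files', []):
        src, dest = entry['source'], entry['dest']
        if os.path.isfile(dest):               # "Deleting <dest>"
            os.remove(dest)
        elif os.path.isdir(dest):
            shutil.rmtree(dest)
        shutil.copy(src, dest)                 # "Copying <src> to <dest>"
        if entry.get('perm'):                  # "Setting permission for <dest>"
            os.chmod(dest, int(entry['perm'], 8))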
Nov 25 10:23:53 compute-0 systemd[1]: Started nova_compute container.
Nov 25 10:23:53 compute-0 sudo[189273]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:54 compute-0 sudo[189542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzrzqrixkeubzydsxlcymbsjmqiegrms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066233.9924173-1546-221576322174901/AnsiballZ_podman_container.py'
Nov 25 10:23:54 compute-0 sudo[189542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:23:54 compute-0 python3.9[189544]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 25 10:23:54 compute-0 systemd[1]: Started libpod-conmon-99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7.scope.
Nov 25 10:23:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ceade1b4c0a555ee0bab5f65f49cd4df459aaf13723e471b9b3634dabb1e988/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ceade1b4c0a555ee0bab5f65f49cd4df459aaf13723e471b9b3634dabb1e988/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ceade1b4c0a555ee0bab5f65f49cd4df459aaf13723e471b9b3634dabb1e988/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 25 10:23:54 compute-0 podman[189569]: 2025-11-25 10:23:54.863748135 +0000 UTC m=+0.149327251 container init 99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:23:54 compute-0 podman[189569]: 2025-11-25 10:23:54.870854015 +0000 UTC m=+0.156433111 container start 99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251118)
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Applying nova statedir ownership
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 25 10:23:54 compute-0 nova_compute_init[189591]: INFO:nova_statedir:Nova statedir ownership complete
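[editor's note] The nova_compute_init pass above is essentially a recursive chown with an environment-driven skip list, which is why /var/lib/nova/compute_id is left untouched (NOVA_STATEDIR_OWNERSHIP_SKIP in the container environment). A condensed sketch of the walk; the target uid/gid comes from the log, and the real nova_statedir_ownership.py additionally restores SELinux contexts via the _nova_secontext mount:

    import os

    TARGET_UID = TARGET_GID = 42436
    SKIP = os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP', '')

    for dirpath, dirnames, filenames in os.walk('/var/lib/nova'):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path == SKIP:
                continue                        # honour the skip list
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                # "Changing ownership of <path> from X:Y to 42436:42436"
                os.lchown(path, TARGET_UID, TARGET_GID)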
Nov 25 10:23:54 compute-0 systemd[1]: libpod-99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7.scope: Deactivated successfully.
Nov 25 10:23:55 compute-0 python3.9[189544]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 25 10:23:55 compute-0 podman[189592]: 2025-11-25 10:23:55.086639894 +0000 UTC m=+0.153332623 container died 99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm)
Nov 25 10:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7-userdata-shm.mount: Deactivated successfully.
Nov 25 10:23:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ceade1b4c0a555ee0bab5f65f49cd4df459aaf13723e471b9b3634dabb1e988-merged.mount: Deactivated successfully.
Nov 25 10:23:55 compute-0 podman[189592]: 2025-11-25 10:23:55.529784977 +0000 UTC m=+0.596477686 container cleanup 99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:23:55 compute-0 systemd[1]: libpod-conmon-99931d32d3f7123a93ebece4a2bfc0e3273a5663b353bd800d5d4b04cd738ab7.scope: Deactivated successfully.
Nov 25 10:23:55 compute-0 sudo[189542]: pam_unix(sudo:session): session closed for user root
Nov 25 10:23:55 compute-0 nova_compute[189381]: 2025-11-25 10:23:55.840 189385 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 10:23:55 compute-0 nova_compute[189381]: 2025-11-25 10:23:55.841 189385 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 10:23:55 compute-0 nova_compute[189381]: 2025-11-25 10:23:55.841 189385 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 25 10:23:55 compute-0 nova_compute[189381]: 2025-11-25 10:23:55.842 189385 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
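[editor's note] os_vif discovers those three plugins through stevedore entry points rather than a hardcoded list, so anything installed under the os_vif namespace is picked up at initialize time. A sketch of the same discovery; the stevedore API is real, and the namespace is assumed from os_vif's plugin registration convention:

    from stevedore import extension

    mgr = extension.ExtensionManager(namespace='os_vif',
                                     invoke_on_load=False)
    # Mirrors "Loaded VIF plugins: linux_bridge, noop, ovs"
    print(sorted(mgr.names()))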
Nov 25 10:23:55 compute-0 nova_compute[189381]: 2025-11-25 10:23:55.985 189385 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.009 189385 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.009 189385 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
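[editor's note] The failed grep above is a capability probe, not an error: the storage layer greps the iscsiadm binary for the node.session.scan string to decide whether manual-scan mode is available, and exit code 1 simply means "feature absent". The same call through the oslo API the log cites, with check_exit_code=False making the non-zero return non-fatal:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
        check_exit_code=False)  # exit 1 here just means "not supported"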
Nov 25 10:23:56 compute-0 sshd-session[161254]: Connection closed by 192.168.122.30 port 48506
Nov 25 10:23:56 compute-0 sshd-session[161251]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:23:56 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Nov 25 10:23:56 compute-0 systemd[1]: session-24.scope: Consumed 1min 49.483s CPU time.
Nov 25 10:23:56 compute-0 systemd-logind[822]: Session 24 logged out. Waiting for processes to exit.
Nov 25 10:23:56 compute-0 systemd-logind[822]: Removed session 24.
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.469 189385 INFO nova.virt.driver [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.571 189385 INFO nova.compute.provider_config [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.591 189385 DEBUG oslo_concurrency.lockutils [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.591 189385 DEBUG oslo_concurrency.lockutils [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.592 189385 DEBUG oslo_concurrency.lockutils [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.592 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.592 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.592 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.592 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.593 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.593 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.593 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.593 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.593 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.593 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.594 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.594 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.594 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.594 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.594 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.594 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.595 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.595 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.595 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.595 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.595 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.596 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.596 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.596 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.596 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.596 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.597 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.597 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.597 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.597 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.597 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.598 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.598 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.598 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.598 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.598 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.598 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.599 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.599 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.599 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.599 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.600 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.600 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.600 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.600 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.600 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.600 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.601 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.601 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.601 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.601 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.601 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.601 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.602 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.602 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.602 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.602 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.602 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.602 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.602 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.603 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.603 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.603 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.603 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.603 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.603 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.604 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.604 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.604 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.604 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.604 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.604 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.605 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.605 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.605 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.605 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.605 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.605 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.606 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.606 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.606 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.606 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.606 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.606 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.607 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.607 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.607 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.607 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.607 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.607 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.608 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.608 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.608 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.608 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.608 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.609 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.609 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.609 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.609 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.609 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.609 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.610 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.610 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.610 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.610 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.610 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.611 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.611 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.611 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.611 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.611 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.611 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.612 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.612 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.612 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.612 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.612 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.613 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.613 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.613 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.613 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.613 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.613 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.613 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.614 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.614 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.614 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.614 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.614 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.615 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.615 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.615 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.615 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.615 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.616 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.616 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.616 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.616 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.616 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.616 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.617 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.617 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.617 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.617 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.617 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.617 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.618 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.618 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.618 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.618 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.618 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.619 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.619 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.619 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.619 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.619 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.619 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.619 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.620 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.620 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.620 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.620 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.620 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.620 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.620 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.620 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.621 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.621 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.621 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.621 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.621 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.621 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.621 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.622 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.622 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.622 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.622 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.622 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.622 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.622 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.623 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.623 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.623 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.623 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.623 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.623 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.623 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.624 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.624 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.624 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.624 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.624 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.624 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.624 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.624 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.625 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.625 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.625 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.625 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.625 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.625 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.625 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.625 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.626 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.626 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.626 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.626 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.626 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.626 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.626 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.627 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.627 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.627 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.627 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.627 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.627 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.627 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.627 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.628 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.628 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.628 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.628 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.628 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.628 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.628 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.629 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.629 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.629 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.629 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.629 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.629 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.629 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.629 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.630 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.630 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.630 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.630 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.630 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.630 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.630 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.631 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.631 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.631 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.631 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.631 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.631 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.631 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.631 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.632 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.632 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.632 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.632 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.632 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.632 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.632 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.632 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.633 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.633 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.633 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.633 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.633 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.633 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.633 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.634 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.634 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.634 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.634 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.634 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.634 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.634 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.634 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.635 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.635 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.635 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.635 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.635 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.635 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.635 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.635 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.636 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.636 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.636 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.636 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.636 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.636 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.636 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.637 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.637 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.637 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.637 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.637 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.637 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.637 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.637 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.638 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.638 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.638 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.638 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.638 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.638 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.638 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.639 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.639 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.639 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.639 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.639 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.639 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.639 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.640 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.640 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.640 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.640 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.640 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.640 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.640 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.640 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.641 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.641 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.641 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.641 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.641 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.641 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.641 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.641 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.642 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.642 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.642 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.642 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.642 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.642 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.642 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.643 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.643 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.643 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.643 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.643 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.643 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.643 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.643 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.644 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.644 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.644 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.644 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.644 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.644 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.644 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.645 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.645 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.645 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.645 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.645 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.645 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.645 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.645 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.646 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.646 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.646 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.646 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.646 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.646 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.647 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.647 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.647 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.647 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.647 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.647 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.647 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.647 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.648 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.648 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.648 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.648 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.648 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.648 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.648 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.648 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.649 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.649 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.649 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.649 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.649 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.649 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.650 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.650 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.650 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.650 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.650 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.650 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.650 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.650 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.651 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.651 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.651 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.651 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.651 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.651 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.652 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.652 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.652 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.652 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.652 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.652 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.652 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.653 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.653 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.653 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.653 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.653 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.653 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.653 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.653 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.654 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.654 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.654 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.654 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.654 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.654 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.654 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.654 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.655 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.655 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.655 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.655 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.655 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.655 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.655 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.656 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.656 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.656 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.656 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.656 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.656 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.656 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.657 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.657 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.657 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.657 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.657 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.657 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.657 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.657 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.658 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.658 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.658 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.658 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.658 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.658 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.658 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.659 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.659 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.659 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.659 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.659 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.659 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.659 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.660 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.660 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.660 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.660 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.660 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.660 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.660 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.660 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.661 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.661 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.661 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.661 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.661 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.661 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.661 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.662 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.662 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.662 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.662 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.662 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.662 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.662 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.662 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.663 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.663 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.663 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.663 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.663 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.663 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.663 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.663 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.664 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.664 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.664 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.664 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.664 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.664 189385 WARNING oslo_config.cfg [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 25 10:23:56 compute-0 nova_compute[189381]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 25 10:23:56 compute-0 nova_compute[189381]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 25 10:23:56 compute-0 nova_compute[189381]: and ``live_migration_inbound_addr`` respectively.
Nov 25 10:23:56 compute-0 nova_compute[189381]: ).  Its value may be silently ignored in the future.
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.665 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
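[editor's note] The WARNING above is a single oslo.config deprecation message: `live_migration_uri` is still set (its value, `qemu+tls://%s/system`, is logged on the line just above), and oslo.config prints the option's multi-line `deprecated_reason` verbatim, which is why the message spans several journal entries. A minimal sketch of how such an option is declared so that this warning fires on first read — the option and keyword names are real oslo.config/nova identifiers, but the exact text is abridged from the warning itself, not copied from nova's source:

```python
from oslo_config import cfg

# Sketch: a StrOpt flagged for removal. Reading it once triggers the
# 'Deprecated: Option "live_migration_uri" from group "libvirt" ...'
# warning seen in the journal, with deprecated_reason printed verbatim.
libvirt_opts = [
    cfg.StrOpt(
        'live_migration_uri',
        deprecated_for_removal=True,
        deprecated_reason=(
            'live_migration_uri is deprecated for removal in favor of '
            'live_migration_scheme and live_migration_inbound_addr.'),
        help='Live migration target URI; "%s" is substituted with the '
             'destination host name at migration time.'),
]

cfg.CONF.register_opts(libvirt_opts, group='libvirt')
```

Per the warning text, the same effect as the `qemu+tls://...` URI should be reachable with the two replacement options (`live_migration_scheme = tls`, plus `live_migration_inbound_addr` if a specific target address is needed).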
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.665 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.665 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.665 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.665 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.665 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.666 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.666 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.666 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.666 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.666 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.666 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.666 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.666 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.667 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.667 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.667 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.667 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.667 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.667 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.667 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.668 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.668 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.668 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.668 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.668 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.668 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.668 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.669 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.669 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.669 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.669 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.669 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.669 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.669 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.670 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.670 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.670 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.670 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.670 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.670 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.670 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.670 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.671 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.671 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.671 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.671 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.671 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.671 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.671 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.672 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.672 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.672 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.672 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.672 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.672 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.672 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.672 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.673 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.673 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.673 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.673 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.673 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.673 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.673 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.673 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.674 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.674 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.674 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.674 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.674 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.674 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.674 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.675 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.675 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.675 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.675 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.675 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.675 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.675 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.675 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.676 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.676 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.676 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.676 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.676 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.676 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.676 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.677 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.677 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.677 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.677 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.677 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.677 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.677 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.677 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.678 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.678 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.678 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.678 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.678 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.678 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.678 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.678 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.679 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.679 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.679 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.679 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.679 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.679 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.679 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.680 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.680 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.680 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.680 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.680 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.680 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.680 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.680 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.681 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.681 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.681 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.681 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.681 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.681 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.681 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.682 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.682 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.682 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.682 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.682 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.682 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.682 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.683 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.683 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.683 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.683 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.683 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.683 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.684 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.684 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.684 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.684 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.684 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.684 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.684 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.685 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.685 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.685 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.685 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.685 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.685 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.685 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.686 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.686 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.686 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.686 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.686 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.686 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.686 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.687 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.687 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.687 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.687 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.687 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.687 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.687 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.688 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.688 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.688 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.688 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.688 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.688 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.688 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.689 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.689 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.689 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.689 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.689 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.689 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.689 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.690 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.690 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.690 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.690 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.690 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.690 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.690 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.691 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.691 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.691 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.691 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.691 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.691 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.691 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.692 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.692 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.692 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.692 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.692 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.692 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.693 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.693 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.693 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.693 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.693 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.693 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.694 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.694 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.694 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.694 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.694 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.694 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.695 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.695 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.695 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.695 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.695 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.695 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.695 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.695 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.696 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.696 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.696 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.696 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.696 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.696 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.696 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.697 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.697 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.697 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.697 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.697 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.697 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.697 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.697 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.698 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.698 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.698 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.698 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.698 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.698 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.698 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.698 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.699 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.699 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.699 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.699 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.699 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.699 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.700 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.700 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.700 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.700 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.700 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.700 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.700 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.701 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.701 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.701 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.701 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.701 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.701 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.701 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.701 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.702 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.702 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.702 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.702 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.702 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.702 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.702 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.703 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.703 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.703 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.703 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.703 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.703 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.703 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.704 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.704 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.704 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.704 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.704 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.704 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.704 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.704 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.705 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.705 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.705 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.705 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.705 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.705 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.705 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.706 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.706 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.706 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.706 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.706 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.706 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.706 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.706 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.707 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.707 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.707 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.707 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.707 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.707 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.707 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.708 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.708 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.708 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.708 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.708 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.708 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.708 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.709 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.709 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.709 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.709 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.709 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.709 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.709 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.709 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.710 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.710 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.710 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.710 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.710 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.710 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.710 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.711 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.711 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.711 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.711 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.711 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.711 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.711 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.711 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.712 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.712 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.712 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.712 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.712 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.712 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.712 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.713 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.713 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.713 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.713 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.713 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.713 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.713 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.714 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.714 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.714 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.714 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.714 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.714 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.714 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.715 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.715 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.715 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.715 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.715 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.715 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.715 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.715 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.716 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.716 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.716 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.716 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.717 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.717 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.717 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.717 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.717 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.717 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.717 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.718 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.718 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.718 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.718 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.718 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.718 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.719 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.719 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.719 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.719 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.719 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.719 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.720 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.720 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.720 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.720 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.720 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.721 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.721 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.721 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.722 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.722 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.722 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.723 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.723 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.723 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.724 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.724 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.724 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.725 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.725 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.725 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.726 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.726 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.726 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.726 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.727 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.727 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.727 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.728 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.728 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.728 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.729 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.729 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.729 189385 DEBUG oslo_service.service [None req-eb3b7c03-c980-45b3-83ae-7582eb504cde - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.731 189385 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.747 189385 INFO nova.virt.node [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Determined node identity a660730c-fa97-4a71-acf8-b1f3eef924ba from /var/lib/nova/compute_id
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.748 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.749 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.750 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.750 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.766 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4f3b2855e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.773 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4f3b2855e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.775 189385 INFO nova.virt.libvirt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Connection event '1' reason 'None'
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.782 189385 INFO nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Libvirt host capabilities <capabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]: 
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <host>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <uuid>2c41005d-4220-44aa-a37c-4fdfb3e65238</uuid>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <arch>x86_64</arch>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model>EPYC-Rome-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <vendor>AMD</vendor>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <microcode version='16777317'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <signature family='23' model='49' stepping='0'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='x2apic'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='tsc-deadline'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='osxsave'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='hypervisor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='tsc_adjust'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='spec-ctrl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='stibp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='arch-capabilities'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='cmp_legacy'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='topoext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='virt-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='lbrv'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='tsc-scale'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='vmcb-clean'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='pause-filter'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='pfthreshold'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='svme-addr-chk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='rdctl-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='skip-l1dfl-vmentry'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='mds-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature name='pschange-mc-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <pages unit='KiB' size='4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <pages unit='KiB' size='2048'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <pages unit='KiB' size='1048576'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <power_management>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <suspend_mem/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <suspend_disk/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <suspend_hybrid/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </power_management>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <iommu support='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <migration_features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <live/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <uri_transports>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <uri_transport>tcp</uri_transport>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <uri_transport>rdma</uri_transport>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </uri_transports>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </migration_features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <topology>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <cells num='1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <cell id='0'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:           <memory unit='KiB'>7864312</memory>
Nov 25 10:23:56 compute-0 nova_compute[189381]:           <pages unit='KiB' size='4'>1966078</pages>
Nov 25 10:23:56 compute-0 nova_compute[189381]:           <pages unit='KiB' size='2048'>0</pages>
Nov 25 10:23:56 compute-0 nova_compute[189381]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 25 10:23:56 compute-0 nova_compute[189381]:           <distances>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <sibling id='0' value='10'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:           </distances>
Nov 25 10:23:56 compute-0 nova_compute[189381]:           <cpus num='8'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:           </cpus>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         </cell>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </cells>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </topology>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <cache>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </cache>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <secmodel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model>selinux</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <doi>0</doi>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </secmodel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <secmodel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model>dac</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <doi>0</doi>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </secmodel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </host>
Nov 25 10:23:56 compute-0 nova_compute[189381]: 
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <guest>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <os_type>hvm</os_type>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <arch name='i686'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <wordsize>32</wordsize>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <domain type='qemu'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <domain type='kvm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </arch>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <pae/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <nonpae/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <acpi default='on' toggle='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <apic default='on' toggle='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <cpuselection/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <deviceboot/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <disksnapshot default='on' toggle='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <externalSnapshot/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </guest>
Nov 25 10:23:56 compute-0 nova_compute[189381]: 
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <guest>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <os_type>hvm</os_type>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <arch name='x86_64'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <wordsize>64</wordsize>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <domain type='qemu'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <domain type='kvm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </arch>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <acpi default='on' toggle='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <apic default='on' toggle='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <cpuselection/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <deviceboot/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <disksnapshot default='on' toggle='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <externalSnapshot/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </guest>
Nov 25 10:23:56 compute-0 nova_compute[189381]: 
Nov 25 10:23:56 compute-0 nova_compute[189381]: </capabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]: 
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.791 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.793 189385 DEBUG nova.virt.libvirt.volume.mount [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.797 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 25 10:23:56 compute-0 nova_compute[189381]: <domainCapabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <domain>kvm</domain>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <arch>i686</arch>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <vcpu max='4096'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <iothreads supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <os supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <enum name='firmware'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <loader supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>rom</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pflash</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='readonly'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>yes</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>no</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='secure'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>no</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </loader>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </os>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='host-passthrough' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='hostPassthroughMigratable'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>on</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>off</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='maximum' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='maximumMigratable'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>on</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>off</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='host-model' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <vendor>AMD</vendor>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='x2apic'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='hypervisor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='stibp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='overflow-recov'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='succor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='lbrv'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc-scale'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='flushbyasid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='pause-filter'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='pfthreshold'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='disable' name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='custom' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Dhyana-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Genoa'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='auto-ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='auto-ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-128'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-256'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-512'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v6'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v7'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='KnightsMill'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512er'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512pf'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='KnightsMill-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512er'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512pf'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G4-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tbm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G5-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tbm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SierraForest'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cmpccxadd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SierraForest-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cmpccxadd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='athlon'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='athlon-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='core2duo'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='core2duo-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='coreduo'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='coreduo-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='n270'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='n270-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='phenom'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='phenom-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <memoryBacking supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <enum name='sourceType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>file</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>anonymous</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>memfd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </memoryBacking>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <disk supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='diskDevice'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>disk</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>cdrom</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>floppy</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>lun</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='bus'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>fdc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>scsi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>sata</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-non-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <graphics supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vnc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>egl-headless</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dbus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </graphics>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <video supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='modelType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vga</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>cirrus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>none</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>bochs</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>ramfb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </video>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <hostdev supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='mode'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>subsystem</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='startupPolicy'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>default</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>mandatory</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>requisite</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>optional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='subsysType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pci</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>scsi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='capsType'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='pciBackend'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </hostdev>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <rng supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-non-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>random</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>egd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>builtin</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <filesystem supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='driverType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>path</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>handle</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtiofs</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </filesystem>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <tpm supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tpm-tis</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tpm-crb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>emulator</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>external</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendVersion'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>2.0</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </tpm>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <redirdev supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='bus'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </redirdev>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <channel supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pty</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>unix</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </channel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <crypto supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>qemu</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>builtin</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </crypto>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <interface supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>default</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>passt</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </interface>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <panic supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>isa</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>hyperv</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </panic>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <console supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>null</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pty</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dev</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>file</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pipe</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>stdio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>udp</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tcp</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>unix</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>qemu-vdagent</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dbus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </console>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <gic supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <vmcoreinfo supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <genid supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <backingStoreInput supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <backup supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <async-teardown supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <ps2 supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <sev supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <sgx supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <hyperv supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='features'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>relaxed</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vapic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>spinlocks</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vpindex</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>runtime</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>synic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>stimer</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>reset</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vendor_id</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>frequencies</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>reenlightenment</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tlbflush</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>ipi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>avic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>emsr_bitmap</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>xmm_input</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <defaults>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <spinlocks>4095</spinlocks>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <stimer_direct>on</stimer_direct>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </defaults>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </hyperv>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <launchSecurity supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='sectype'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tdx</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </launchSecurity>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </features>
Nov 25 10:23:56 compute-0 nova_compute[189381]: </domainCapabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.802 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 25 10:23:56 compute-0 nova_compute[189381]: <domainCapabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <domain>kvm</domain>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <arch>i686</arch>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <vcpu max='240'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <iothreads supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <os supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <enum name='firmware'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <loader supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>rom</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pflash</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='readonly'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>yes</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>no</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='secure'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>no</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </loader>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </os>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='host-passthrough' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='hostPassthroughMigratable'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>on</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>off</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='maximum' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='maximumMigratable'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>on</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>off</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='host-model' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <vendor>AMD</vendor>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='x2apic'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='hypervisor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='stibp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='overflow-recov'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='succor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='lbrv'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc-scale'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='flushbyasid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='pause-filter'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='pfthreshold'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='disable' name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='custom' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Dhyana-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Genoa'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='auto-ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='auto-ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-128'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-256'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-512'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v6'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v7'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='KnightsMill'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512er'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512pf'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='KnightsMill-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512er'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512pf'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G4-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tbm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G5-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tbm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SierraForest'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cmpccxadd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SierraForest-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cmpccxadd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='athlon'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='athlon-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='core2duo'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='core2duo-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='coreduo'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='coreduo-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='n270'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='n270-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='phenom'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='phenom-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <memoryBacking supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <enum name='sourceType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>file</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>anonymous</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>memfd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </memoryBacking>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <disk supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='diskDevice'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>disk</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>cdrom</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>floppy</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>lun</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='bus'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>ide</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>fdc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>scsi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>sata</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-non-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <graphics supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vnc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>egl-headless</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dbus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </graphics>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <video supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='modelType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vga</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>cirrus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>none</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>bochs</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>ramfb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </video>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <hostdev supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='mode'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>subsystem</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='startupPolicy'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>default</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>mandatory</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>requisite</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>optional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='subsysType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pci</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>scsi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='capsType'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='pciBackend'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </hostdev>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <rng supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-non-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>random</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>egd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>builtin</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <filesystem supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='driverType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>path</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>handle</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtiofs</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </filesystem>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <tpm supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tpm-tis</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tpm-crb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>emulator</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>external</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendVersion'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>2.0</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </tpm>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <redirdev supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='bus'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </redirdev>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <channel supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pty</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>unix</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </channel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <crypto supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>qemu</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>builtin</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </crypto>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <interface supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>default</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>passt</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </interface>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <panic supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>isa</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>hyperv</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </panic>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <console supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>null</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pty</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dev</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>file</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pipe</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>stdio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>udp</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tcp</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>unix</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>qemu-vdagent</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dbus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </console>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <gic supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <vmcoreinfo supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <genid supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <backingStoreInput supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <backup supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <async-teardown supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <ps2 supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <sev supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <sgx supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <hyperv supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='features'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>relaxed</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vapic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>spinlocks</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vpindex</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>runtime</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>synic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>stimer</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>reset</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vendor_id</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>frequencies</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>reenlightenment</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tlbflush</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>ipi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>avic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>emsr_bitmap</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>xmm_input</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <defaults>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <spinlocks>4095</spinlocks>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <stimer_direct>on</stimer_direct>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </defaults>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </hyperv>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <launchSecurity supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='sectype'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tdx</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </launchSecurity>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </features>
Nov 25 10:23:56 compute-0 nova_compute[189381]: </domainCapabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.834 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.838 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 25 10:23:56 compute-0 nova_compute[189381]: <domainCapabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <domain>kvm</domain>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <arch>x86_64</arch>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <vcpu max='4096'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <iothreads supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <os supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <enum name='firmware'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>efi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <loader supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>rom</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pflash</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='readonly'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>yes</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>no</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='secure'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>yes</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>no</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </loader>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </os>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='host-passthrough' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='hostPassthroughMigratable'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>on</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>off</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='maximum' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='maximumMigratable'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>on</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>off</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='host-model' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <vendor>AMD</vendor>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='x2apic'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='hypervisor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='stibp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='overflow-recov'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='succor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='lbrv'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc-scale'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='flushbyasid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='pause-filter'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='pfthreshold'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='disable' name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='custom' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Dhyana-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Genoa'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='auto-ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='auto-ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-128'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-256'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-512'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Haswell-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v6'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v7'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='KnightsMill'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512er'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512pf'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='KnightsMill-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512er'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512pf'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G4-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tbm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Opteron_G5-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tbm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SierraForest'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cmpccxadd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='SierraForest-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cmpccxadd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='athlon'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='athlon-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='core2duo'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='core2duo-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='coreduo'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='coreduo-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='n270'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='n270-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='phenom'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='phenom-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <memoryBacking supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <enum name='sourceType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>file</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>anonymous</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>memfd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </memoryBacking>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <disk supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='diskDevice'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>disk</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>cdrom</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>floppy</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>lun</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='bus'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>fdc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>scsi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>sata</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-non-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <graphics supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vnc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>egl-headless</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dbus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </graphics>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <video supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='modelType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vga</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>cirrus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>none</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>bochs</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>ramfb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </video>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <hostdev supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='mode'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>subsystem</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='startupPolicy'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>default</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>mandatory</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>requisite</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>optional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='subsysType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pci</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>scsi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='capsType'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='pciBackend'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </hostdev>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <rng supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtio-non-transitional</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>random</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>egd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>builtin</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <filesystem supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='driverType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>path</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>handle</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>virtiofs</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </filesystem>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <tpm supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tpm-tis</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tpm-crb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>emulator</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>external</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendVersion'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>2.0</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </tpm>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <redirdev supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='bus'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </redirdev>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <channel supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pty</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>unix</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </channel>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <crypto supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>qemu</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>builtin</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </crypto>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <interface supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='backendType'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>default</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>passt</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </interface>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <panic supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>isa</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>hyperv</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </panic>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <console supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>null</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vc</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pty</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dev</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>file</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pipe</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>stdio</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>udp</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tcp</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>unix</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>qemu-vdagent</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>dbus</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </console>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <features>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <gic supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <vmcoreinfo supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <genid supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <backingStoreInput supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <backup supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <async-teardown supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <ps2 supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <sev supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <sgx supported='no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <hyperv supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='features'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>relaxed</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vapic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>spinlocks</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vpindex</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>runtime</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>synic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>stimer</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>reset</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>vendor_id</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>frequencies</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>reenlightenment</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tlbflush</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>ipi</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>avic</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>emsr_bitmap</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>xmm_input</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <defaults>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <spinlocks>4095</spinlocks>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <stimer_direct>on</stimer_direct>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </defaults>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </hyperv>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <launchSecurity supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='sectype'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>tdx</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </launchSecurity>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </features>
Nov 25 10:23:56 compute-0 nova_compute[189381]: </domainCapabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 25 10:23:56 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.902 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 25 10:23:56 compute-0 nova_compute[189381]: <domainCapabilities>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <path>/usr/libexec/qemu-kvm</path>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <domain>kvm</domain>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <arch>x86_64</arch>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <vcpu max='240'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <iothreads supported='yes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <os supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <enum name='firmware'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <loader supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>rom</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>pflash</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='readonly'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>yes</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>no</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='secure'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>no</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </loader>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   </os>
Nov 25 10:23:56 compute-0 nova_compute[189381]:   <cpu>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='host-passthrough' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='hostPassthroughMigratable'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>on</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>off</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='maximum' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <enum name='maximumMigratable'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>on</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <value>off</value>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='host-model' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <vendor>AMD</vendor>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='x2apic'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc-deadline'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='hypervisor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc_adjust'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='spec-ctrl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='stibp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='cmp_legacy'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='overflow-recov'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='succor'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='amd-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='virt-ssbd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='lbrv'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='tsc-scale'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='vmcb-clean'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='flushbyasid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='pause-filter'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='pfthreshold'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='svme-addr-chk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <feature policy='disable' name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:56 compute-0 nova_compute[189381]:     <mode name='custom' supported='yes'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Broadwell-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cascadelake-Server-v5'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Cooperlake-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Denverton-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='Dhyana-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Genoa'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='auto-ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Genoa-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='auto-ibrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Milan-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amd-psfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='no-nested-data-bp'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='null-sel-clr-base'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='stibp-always-on'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-Rome-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-v3'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='EPYC-v4'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids-v1'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 25 10:23:56 compute-0 nova_compute[189381]:       <blockers model='GraniteRapids-v2'>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-128'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-256'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx10-512'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:56 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='prefetchiti'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Haswell'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Haswell-IBRS'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Haswell-noTSX'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Haswell-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Haswell-v2'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Haswell-v3'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Haswell-v4'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-noTSX'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v2'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v3'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v4'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v5'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v6'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Icelake-Server-v7'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='IvyBridge'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-IBRS'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='IvyBridge-v2'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='KnightsMill'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512er'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512pf'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='KnightsMill-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-4fmaps'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-4vnniw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512er'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512pf'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Opteron_G4'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Opteron_G4-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Opteron_G5'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='tbm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Opteron_G5-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fma4'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='tbm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xop'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v2'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='SapphireRapids-v3'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-bf16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-int8'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='amx-tile'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-bf16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-fp16'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512-vpopcntdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bitalg'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vbmi2'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrc'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fzrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='la57'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='taa-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='tsx-ldtrk'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xfd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='SierraForest'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='cmpccxadd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='SierraForest-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-ifma'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-ne-convert'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-vnni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx-vnni-int8'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='bus-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='cmpccxadd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fbsdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='fsrs'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ibrs-all'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='mcdt-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pbrsb-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='psdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='sbdr-ssdp-no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='serialize'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vaes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='vpclmulqdq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-IBRS'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v2'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v3'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Client-v4'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-IBRS'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v2'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='hle'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='rtm'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v3'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v4'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Skylake-Server-v5'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512bw'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512cd'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512dq'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512f'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='avx512vl'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='invpcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pcid'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='pku'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Snowridge'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='mpx'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v2'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v3'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='core-capability'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='split-lock-detect'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='Snowridge-v4'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='cldemote'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='erms'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='gfni'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdir64b'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='movdiri'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='xsaves'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='athlon'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='athlon-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='core2duo'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='core2duo-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='coreduo'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='coreduo-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='n270'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='n270-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='ss'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='phenom'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <blockers model='phenom-v1'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='3dnow'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <feature name='3dnowext'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </blockers>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </mode>
Nov 25 10:23:57 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:23:57 compute-0 nova_compute[189381]:   <memoryBacking supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <enum name='sourceType'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <value>file</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <value>anonymous</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <value>memfd</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:   </memoryBacking>
Nov 25 10:23:57 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <disk supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='diskDevice'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>disk</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>cdrom</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>floppy</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>lun</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='bus'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>ide</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>fdc</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>scsi</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>sata</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtio-transitional</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtio-non-transitional</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <graphics supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>vnc</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>egl-headless</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>dbus</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </graphics>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <video supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='modelType'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>vga</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>cirrus</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>none</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>bochs</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>ramfb</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </video>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <hostdev supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='mode'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>subsystem</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='startupPolicy'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>default</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>mandatory</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>requisite</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>optional</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='subsysType'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>pci</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>scsi</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='capsType'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='pciBackend'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </hostdev>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <rng supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtio</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtio-transitional</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtio-non-transitional</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>random</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>egd</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>builtin</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <filesystem supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='driverType'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>path</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>handle</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>virtiofs</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </filesystem>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <tpm supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>tpm-tis</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>tpm-crb</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>emulator</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>external</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='backendVersion'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>2.0</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </tpm>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <redirdev supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='bus'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>usb</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </redirdev>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <channel supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>pty</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>unix</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </channel>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <crypto supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='model'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>qemu</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='backendModel'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>builtin</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </crypto>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <interface supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='backendType'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>default</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>passt</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </interface>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <panic supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='model'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>isa</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>hyperv</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </panic>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <console supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='type'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>null</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>vc</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>pty</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>dev</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>file</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>pipe</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>stdio</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>udp</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>tcp</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>unix</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>qemu-vdagent</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>dbus</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </console>
Nov 25 10:23:57 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:23:57 compute-0 nova_compute[189381]:   <features>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <gic supported='no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <vmcoreinfo supported='yes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <genid supported='yes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <backingStoreInput supported='yes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <backup supported='yes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <async-teardown supported='yes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <ps2 supported='yes'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <sev supported='no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <sgx supported='no'/>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <hyperv supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='features'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>relaxed</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>vapic</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>spinlocks</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>vpindex</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>runtime</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>synic</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>stimer</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>reset</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>vendor_id</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>frequencies</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>reenlightenment</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>tlbflush</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>ipi</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>avic</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>emsr_bitmap</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>xmm_input</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <defaults>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <spinlocks>4095</spinlocks>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <stimer_direct>on</stimer_direct>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <tlbflush_direct>on</tlbflush_direct>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <tlbflush_extended>on</tlbflush_extended>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </defaults>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </hyperv>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     <launchSecurity supported='yes'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       <enum name='sectype'>
Nov 25 10:23:57 compute-0 nova_compute[189381]:         <value>tdx</value>
Nov 25 10:23:57 compute-0 nova_compute[189381]:       </enum>
Nov 25 10:23:57 compute-0 nova_compute[189381]:     </launchSecurity>
Nov 25 10:23:57 compute-0 nova_compute[189381]:   </features>
Nov 25 10:23:57 compute-0 nova_compute[189381]: </domainCapabilities>
Nov 25 10:23:57 compute-0 nova_compute[189381]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
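The XML dump above is libvirt's domainCapabilities report, which nova fetches at startup to learn which CPU models, devices, and features the hypervisor can expose. A minimal sketch of the same query outside nova, assuming the libvirt-python bindings are installed and the qemu:///system URI is reachable:

    import libvirt                       # libvirt-python bindings (assumed installed)
    import xml.etree.ElementTree as ET

    # Connect read-only to the local system libvirtd.
    conn = libvirt.openReadOnly('qemu:///system')

    # Same document nova logs above; None/0 arguments let libvirt pick the
    # host-default emulator, arch, machine type, and virt type.
    caps_xml = conn.getDomainCapabilities(None, None, None, None, 0)
    root = ET.fromstring(caps_xml)

    # Print the CPU models the host can actually run, mirroring the
    # <model usable='yes' ...> entries in the dump.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            flag = ' (deprecated)' if model.get('deprecated') == 'yes' else ''
            print(model.text + flag)

    conn.close()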
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.967 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.967 189385 INFO nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Secure Boot support detected
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.969 189385 INFO nova.virt.libvirt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.979 189385 DEBUG nova.virt.libvirt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:56.996 189385 INFO nova.virt.node [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Determined node identity a660730c-fa97-4a71-acf8-b1f3eef924ba from /var/lib/nova/compute_id
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.011 189385 WARNING nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Compute nodes ['a660730c-fa97-4a71-acf8-b1f3eef924ba'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.040 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.054 189385 WARNING nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.054 189385 DEBUG oslo_concurrency.lockutils [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.054 189385 DEBUG oslo_concurrency.lockutils [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.055 189385 DEBUG oslo_concurrency.lockutils [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.055 189385 DEBUG nova.compute.resource_tracker [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.231 189385 WARNING nova.virt.libvirt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.232 189385 DEBUG nova.compute.resource_tracker [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6037MB free_disk=72.43182754516602GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
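The pci_devices field in the resource view above is a JSON list describing every host PCI function nova tracks. A short sketch summarising such a list by vendor:product pair, with two entries copied from the log (1af4 is the virtio vendor ID, 8086 is Intel):

    import json
    from collections import Counter

    # Two entries copied from the resource view above; the real list is longer.
    pci_devices = json.loads('''[
      {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
       "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1000", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0",
       "product_id": "7000", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7000", "dev_type": "type-PCI"}
    ]''')

    # Count devices per vendor:product pair.
    counts = Counter(f"{d['vendor_id']}:{d['product_id']}" for d in pci_devices)
    for pair, n in counts.items():
        print(pair, n)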
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.233 189385 DEBUG oslo_concurrency.lockutils [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.233 189385 DEBUG oslo_concurrency.lockutils [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.347 189385 WARNING nova.compute.resource_tracker [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] No compute node record for compute-0.ctlplane.example.com:a660730c-fa97-4a71-acf8-b1f3eef924ba: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host a660730c-fa97-4a71-acf8-b1f3eef924ba could not be found.
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.404 189385 INFO nova.compute.resource_tracker [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: a660730c-fa97-4a71-acf8-b1f3eef924ba
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.465 189385 DEBUG nova.compute.resource_tracker [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:23:57 compute-0 nova_compute[189381]: 2025-11-25 10:23:57.465 189385 DEBUG nova.compute.resource_tracker [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:23:58 compute-0 nova_compute[189381]: 2025-11-25 10:23:58.430 189385 INFO nova.scheduler.client.report [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [req-a18e38db-775c-4df6-9ad6-d4b69f48e03e] Created resource provider record via placement API for resource provider with UUID a660730c-fa97-4a71-acf8-b1f3eef924ba and name compute-0.ctlplane.example.com.
Nov 25 10:23:58 compute-0 nova_compute[189381]: 2025-11-25 10:23:58.914 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 25 10:23:58 compute-0 nova_compute[189381]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 25 10:23:58 compute-0 nova_compute[189381]: 2025-11-25 10:23:58.914 189385 INFO nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] kernel doesn't support AMD SEV
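The SEV probe logged just above simply reads a one-line sysfs module parameter. A sketch of the same check, assuming an x86 host exposing the kvm_amd parameter path shown in the log:

    from pathlib import Path

    # The kvm_amd module exposes a one-character parameter:
    # 'Y'/'1' when SEV is enabled, 'N'/'0' otherwise (here it contains 'N').
    sev_param = Path('/sys/module/kvm_amd/parameters/sev')

    if sev_param.exists():
        enabled = sev_param.read_text().strip() in ('Y', '1')
        print('AMD SEV kernel support:', enabled)
    else:
        print('kvm_amd module not loaded; SEV unavailable')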
Nov 25 10:23:58 compute-0 nova_compute[189381]: 2025-11-25 10:23:58.915 189385 DEBUG nova.compute.provider_tree [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:23:58 compute-0 nova_compute[189381]: 2025-11-25 10:23:58.915 189385 DEBUG nova.virt.libvirt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 10:23:58 compute-0 nova_compute[189381]: 2025-11-25 10:23:58.968 189385 DEBUG nova.scheduler.client.report [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Updated inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 10:23:58 compute-0 nova_compute[189381]: 2025-11-25 10:23:58.968 189385 DEBUG nova.compute.provider_tree [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Updating resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 10:23:58 compute-0 nova_compute[189381]: 2025-11-25 10:23:58.968 189385 DEBUG nova.compute.provider_tree [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
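The inventory dictionaries above determine schedulable capacity: for each resource class, placement treats usable capacity as (total - reserved) * allocation_ratio. Worked through with the values from this log:

    # capacity = (total - reserved) * allocation_ratio, values from this log.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable")
    # -> MEMORY_MB: 7167, VCPU: 32, DISK_GB: 71.1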
Nov 25 10:23:59 compute-0 nova_compute[189381]: 2025-11-25 10:23:59.114 189385 DEBUG nova.compute.provider_tree [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Updating resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 10:23:59 compute-0 nova_compute[189381]: 2025-11-25 10:23:59.141 189385 DEBUG nova.compute.resource_tracker [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:23:59 compute-0 nova_compute[189381]: 2025-11-25 10:23:59.142 189385 DEBUG oslo_concurrency.lockutils [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:23:59 compute-0 nova_compute[189381]: 2025-11-25 10:23:59.142 189385 DEBUG nova.service [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 25 10:23:59 compute-0 nova_compute[189381]: 2025-11-25 10:23:59.190 189385 DEBUG nova.service [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 25 10:23:59 compute-0 nova_compute[189381]: 2025-11-25 10:23:59.190 189385 DEBUG nova.servicegroup.drivers.db [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 25 10:24:02 compute-0 sshd-session[189675]: Accepted publickey for zuul from 192.168.122.30 port 51786 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:24:02 compute-0 systemd-logind[822]: New session 26 of user zuul.
Nov 25 10:24:02 compute-0 systemd[1]: Started Session 26 of User zuul.
Nov 25 10:24:02 compute-0 sshd-session[189675]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:24:03 compute-0 python3.9[189828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:24:05 compute-0 sudo[189982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cepnzknhfognabioxzwjpcyctntpgryp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066244.5251713-36-279638078817751/AnsiballZ_systemd_service.py'
Nov 25 10:24:05 compute-0 sudo[189982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:05 compute-0 python3.9[189984]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:24:05 compute-0 systemd[1]: Reloading.
Nov 25 10:24:05 compute-0 systemd-rc-local-generator[190012]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:24:05 compute-0 systemd-sysv-generator[190016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:24:05 compute-0 sudo[189982]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:07 compute-0 python3.9[190170]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:24:07 compute-0 network[190187]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:24:07 compute-0 network[190188]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:24:07 compute-0 network[190189]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 10:24:10 compute-0 podman[190295]: 2025-11-25 10:24:10.957750594 +0000 UTC m=+0.074095166 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
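The health_status=healthy events in these podman records come from podman periodically running the container's configured test command ('/openstack/healthcheck' per the config_data above). The same check can be triggered on demand; a sketch assuming the container name from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command and exits 0 when the container reports healthy.
    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
        capture_output=True, text=True,
    )
    status = 'healthy' if result.returncode == 0 else 'unhealthy'
    print(status, (result.stdout or result.stderr).strip())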
Nov 25 10:24:11 compute-0 sudo[190478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuunfesimxuxnoaxkriaduofyzdbqubj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066251.6737792-55-206925017166261/AnsiballZ_systemd_service.py'
Nov 25 10:24:11 compute-0 sudo[190478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:12 compute-0 python3.9[190480]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:24:12 compute-0 sudo[190478]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:12 compute-0 sudo[190631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dihhwmxicimqtlljbueiwwlmssaidkha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066252.5464423-65-255557482113791/AnsiballZ_file.py'
Nov 25 10:24:12 compute-0 sudo[190631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:13 compute-0 python3.9[190633]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:13 compute-0 sudo[190631]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:13 compute-0 rsyslogd[1010]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:24:13 compute-0 sudo[190784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqiivmlamknvpbfhmymolboahstuazss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066253.3569884-73-185101391712972/AnsiballZ_file.py'
Nov 25 10:24:13 compute-0 sudo[190784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:13 compute-0 python3.9[190786]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:13 compute-0 sudo[190784]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:14 compute-0 sudo[190936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzjlitdhspovbqqowejlldzbwtzxqfcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066254.01184-82-11083812017529/AnsiballZ_command.py'
Nov 25 10:24:14 compute-0 sudo[190936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:14 compute-0 python3.9[190938]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:24:14 compute-0 sudo[190936]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:15 compute-0 python3.9[191090]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:24:15 compute-0 sudo[191240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awkkxvgxdfqxlempcwzytzizxkfpezmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066255.6344552-100-38419474804308/AnsiballZ_systemd_service.py'
Nov 25 10:24:15 compute-0 sudo[191240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:16 compute-0 python3.9[191242]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:24:16 compute-0 systemd[1]: Reloading.
Nov 25 10:24:16 compute-0 systemd-sysv-generator[191272]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:24:16 compute-0 systemd-rc-local-generator[191268]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:24:16 compute-0 sudo[191240]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:16 compute-0 sudo[191441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acvezosnacrmbceahurihfduhicbilue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066256.6314678-108-221949909227864/AnsiballZ_command.py'
Nov 25 10:24:16 compute-0 sudo[191441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:16 compute-0 podman[191401]: 2025-11-25 10:24:16.988418045 +0000 UTC m=+0.115017706 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 10:24:17 compute-0 python3.9[191449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:24:17 compute-0 sudo[191441]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:17 compute-0 sudo[191607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epynziiiihvtusuxofnkbzoyratkffcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066257.463837-117-160999193609448/AnsiballZ_file.py'
Nov 25 10:24:17 compute-0 sudo[191607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:17 compute-0 python3.9[191609]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:24:17 compute-0 sudo[191607]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:18 compute-0 python3.9[191759]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:24:19 compute-0 python3.9[191911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:19 compute-0 python3.9[192032]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066258.8500116-133-192540629307456/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
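Each ansible copy invocation above carries a checksum= value: the module compares the SHA-1 of the rendered source file against the destination and only rewrites the file on mismatch, which is what keeps these tasks idempotent. A minimal sketch of that comparison, with hypothetical paths:

    import hashlib
    from pathlib import Path

    def sha1_of(path: str) -> str:
        """SHA-1 of a file's contents, as in the checksum= fields logged above."""
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    # Copy only when content differs (paths are for illustration only).
    src, dest = '/tmp/source.conf', '/tmp/dest.conf'
    if not Path(dest).exists() or sha1_of(src) != sha1_of(dest):
        Path(dest).write_bytes(Path(src).read_bytes())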
Nov 25 10:24:20 compute-0 sudo[192182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yomcxmjdtqocjcptnadypkypftzyprce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066260.2208896-148-241718960400923/AnsiballZ_group.py'
Nov 25 10:24:20 compute-0 sudo[192182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:20 compute-0 python3.9[192184]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 25 10:24:20 compute-0 sudo[192182]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:21 compute-0 sudo[192334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nraixbgkryfajpxwpyxmwbxzrrtsdenv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066261.1547384-159-187334620192206/AnsiballZ_getent.py'
Nov 25 10:24:21 compute-0 sudo[192334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:21 compute-0 python3.9[192336]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 25 10:24:21 compute-0 sudo[192334]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:22 compute-0 sudo[192487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmcmkcngloekbxwrpwrxbmdpekkywbuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066261.9313831-167-252120114293057/AnsiballZ_group.py'
Nov 25 10:24:22 compute-0 sudo[192487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:22 compute-0 python3.9[192489]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 25 10:24:22 compute-0 groupadd[192491]: group added to /etc/group: name=ceilometer, GID=42405
Nov 25 10:24:22 compute-0 groupadd[192491]: group added to /etc/gshadow: name=ceilometer
Nov 25 10:24:22 compute-0 groupadd[192491]: new group: name=ceilometer, GID=42405
Nov 25 10:24:22 compute-0 podman[192490]: 2025-11-25 10:24:22.465505576 +0000 UTC m=+0.056783418 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 10:24:22 compute-0 sudo[192487]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:23 compute-0 sudo[192665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwaqneycrlfjicxnxbabfkrzcrkgtwzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066262.6229348-175-16261859864347/AnsiballZ_user.py'
Nov 25 10:24:23 compute-0 sudo[192665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:23 compute-0 python3.9[192667]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 25 10:24:23 compute-0 useradd[192669]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Nov 25 10:24:23 compute-0 useradd[192669]: add 'ceilometer' to group 'libvirt'
Nov 25 10:24:23 compute-0 useradd[192669]: add 'ceilometer' to shadow group 'libvirt'
Nov 25 10:24:24 compute-0 sudo[192665]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:25 compute-0 python3.9[192825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:26 compute-0 python3.9[192946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764066265.2243683-201-76587811962220/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:26 compute-0 python3.9[193096]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:27 compute-0 python3.9[193217]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764066266.3549385-201-114779051935957/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:27 compute-0 python3.9[193367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:28 compute-0 python3.9[193488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764066267.513419-201-261276853232611/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:29 compute-0 python3.9[193638]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:24:29 compute-0 python3.9[193790]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:24:30 compute-0 python3.9[193942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:30 compute-0 python3.9[194063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066269.9106452-260-197820590569945/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:31 compute-0 python3.9[194213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:31 compute-0 python3.9[194289]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:32 compute-0 python3.9[194439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:32 compute-0 python3.9[194560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066272.003822-260-144855193776641/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:33 compute-0 nova_compute[189381]: 2025-11-25 10:24:33.192 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:33 compute-0 nova_compute[189381]: 2025-11-25 10:24:33.207 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:33 compute-0 python3.9[194710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:34 compute-0 python3.9[194831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066273.1368802-260-68848013717498/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:34 compute-0 python3.9[194981]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:35 compute-0 python3.9[195102]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066274.2682307-260-31767612334218/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:35 compute-0 python3.9[195252]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:24:36.012 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:24:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:24:36.013 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:24:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:24:36.013 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:24:36 compute-0 python3.9[195373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066275.3382726-260-222583696112508/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:36 compute-0 python3.9[195523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:37 compute-0 python3.9[195644]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066276.4485373-260-22199711182895/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:37 compute-0 python3.9[195794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:38 compute-0 python3.9[195915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066277.5004885-260-32436917535085/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:39 compute-0 python3.9[196065]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:39 compute-0 python3.9[196186]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066278.738187-260-140830540188521/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:40 compute-0 python3.9[196336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:40 compute-0 python3.9[196457]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066279.8422937-260-168898264683997/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:41 compute-0 podman[196581]: 2025-11-25 10:24:41.272380314 +0000 UTC m=+0.050310296 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:24:41 compute-0 python3.9[196616]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:42 compute-0 python3.9[196747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066281.0031593-260-265429873838576/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:42 compute-0 python3.9[196897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:43 compute-0 python3.9[196973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:43 compute-0 python3.9[197123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:44 compute-0 python3.9[197199]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:44 compute-0 python3.9[197349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:45 compute-0 python3.9[197425]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:45 compute-0 sudo[197575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jecywywbmvawefawkfbvgwphzhkglibo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066285.425937-449-228881037635962/AnsiballZ_file.py'
Nov 25 10:24:45 compute-0 sudo[197575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:45 compute-0 python3.9[197577]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:45 compute-0 sudo[197575]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:46 compute-0 sudo[197727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngcxjjkswdqjrhpbypwsvrqmomaepyxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066286.0356646-457-175597551085686/AnsiballZ_file.py'
Nov 25 10:24:46 compute-0 sudo[197727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:46 compute-0 python3.9[197729]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:46 compute-0 sudo[197727]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:46 compute-0 sudo[197879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaqvuapqclyuciwxhodqmgpyvcewrnuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066286.706653-465-81244265742790/AnsiballZ_file.py'
Nov 25 10:24:46 compute-0 sudo[197879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:47 compute-0 python3.9[197881]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:24:47 compute-0 sudo[197879]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:47 compute-0 podman[197882]: 2025-11-25 10:24:47.299449204 +0000 UTC m=+0.097096202 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 10:24:47 compute-0 sudo[198057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyjaivycjolbbigipdpszpnqmphhcjwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066287.3671281-473-150323809296896/AnsiballZ_systemd_service.py'
Nov 25 10:24:47 compute-0 sudo[198057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:47 compute-0 python3.9[198059]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:24:48 compute-0 systemd[1]: Reloading.
Nov 25 10:24:48 compute-0 systemd-rc-local-generator[198087]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:24:48 compute-0 systemd-sysv-generator[198090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:24:48 compute-0 systemd[1]: Listening on Podman API Socket.
Nov 25 10:24:48 compute-0 sudo[198057]: pam_unix(sudo:session): session closed for user root
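Annotation: the systemd_service task above enables and starts podman.socket, which exposes the Podman API on /run/podman/podman.sock (the "Listening on Podman API Socket" line confirms it). A minimal liveness probe over that socket, sketched in Python; the versioned path /v4.0.0/libpod/_ping may differ across podman releases, and the caller must be permitted to read the socket (root for the system socket):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection over an AF_UNIX socket; the host argument is unused.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.0.0/libpod/_ping")
    print(conn.getresponse().status)  # 200 when the API socket is serving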
Nov 25 10:24:48 compute-0 sudo[198248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rneeizkprttyrpvzrdafdqjgiybkiczy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066288.669285-482-214451174939399/AnsiballZ_stat.py'
Nov 25 10:24:48 compute-0 sudo[198248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:49 compute-0 python3.9[198250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:49 compute-0 sudo[198248]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:49 compute-0 sudo[198371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqziyqizjfipaaszxwqiyohocgxzdafn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066288.669285-482-214451174939399/AnsiballZ_copy.py'
Nov 25 10:24:49 compute-0 sudo[198371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:49 compute-0 python3.9[198373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066288.669285-482-214451174939399/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:24:49 compute-0 sudo[198371]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:50 compute-0 sudo[198447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwbdhugkfmacgsczopeneglvfvheagzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066288.669285-482-214451174939399/AnsiballZ_stat.py'
Nov 25 10:24:50 compute-0 sudo[198447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:50 compute-0 python3.9[198449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:24:50 compute-0 sudo[198447]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:50 compute-0 sudo[198570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtwsvjlnbnqcfhxwekpkhnqiwemycxfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066288.669285-482-214451174939399/AnsiballZ_copy.py'
Nov 25 10:24:50 compute-0 sudo[198570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:50 compute-0 python3.9[198572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066288.669285-482-214451174939399/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:24:50 compute-0 sudo[198570]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:51 compute-0 sudo[198722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvkhxnkytuvkqqrkmwevlchpkauqdyzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066291.1632047-510-225063000030582/AnsiballZ_container_config_data.py'
Nov 25 10:24:51 compute-0 sudo[198722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:51 compute-0 python3.9[198724]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 25 10:24:51 compute-0 sudo[198722]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:52 compute-0 sudo[198874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtfpksteplqduubygymezuyyphnfqqhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066292.066893-519-198256655262921/AnsiballZ_container_config_hash.py'
Nov 25 10:24:52 compute-0 sudo[198874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:52 compute-0 podman[198876]: 2025-11-25 10:24:52.624609391 +0000 UTC m=+0.074451208 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:24:52 compute-0 python3.9[198877]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:24:52 compute-0 sudo[198874]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:53 compute-0 sudo[199047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwiryuxdgimaiedhbrvvknzfhvvfafvz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066293.0743868-529-74608729271914/AnsiballZ_edpm_container_manage.py'
Nov 25 10:24:53 compute-0 sudo[199047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:53 compute-0 python3[199049]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:24:54 compute-0 podman[199085]: 2025-11-25 10:24:54.02498641 +0000 UTC m=+0.022139117 image pull 62d0cdbd80511c7b16dc1b12830c26126f29d8961a194546e50bdb4d0a16aab7 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 25 10:24:54 compute-0 podman[199085]: 2025-11-25 10:24:54.169102158 +0000 UTC m=+0.166254845 container create 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 25 10:24:54 compute-0 python3[199049]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
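Annotation: the config_data label in the create command above is embedded as a Python dict repr (single quotes, True/False literals), not JSON, so json.loads will fail on it when read back from the container. A sketch of recovering it, assuming podman is on PATH and the container exists under this name:

    import ast, json, subprocess

    # Pull the config_data label back out of the created container.
    out = subprocess.run(
        ["podman", "inspect", "ceilometer_agent_compute",
         "--format", '{{ index .Config.Labels "config_data" }}'],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    # ast.literal_eval parses the dict repr safely; re-serialize as JSON.
    config_data = ast.literal_eval(out)
    print(json.dumps(config_data["healthcheck"], indent=2))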
Nov 25 10:24:54 compute-0 sudo[199047]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:54 compute-0 sudo[199275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnocuudbusitlmjjcpsrdppicobkukdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066294.4689248-537-62919672632270/AnsiballZ_stat.py'
Nov 25 10:24:54 compute-0 sudo[199275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:54 compute-0 python3.9[199277]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:24:54 compute-0 sudo[199275]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:55 compute-0 sudo[199429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvalzdojjduneaebzmqqikbwmqrucptj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066295.1722906-546-7219902259334/AnsiballZ_file.py'
Nov 25 10:24:55 compute-0 sudo[199429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:55 compute-0 python3.9[199431]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:55 compute-0 sudo[199429]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.024 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.037 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.037 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.037 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.038 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.038 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.038 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.038 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.038 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.038 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.065 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.066 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.066 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.066 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:24:56 compute-0 sudo[199580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwqjlcacnbhmkafjdwaoyazpwiznmuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066295.7253625-546-133743691907249/AnsiballZ_copy.py'
Nov 25 10:24:56 compute-0 sudo[199580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.273 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.274 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5988MB free_disk=72.43135070800781GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.275 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.275 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:24:56 compute-0 python3.9[199582]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764066295.7253625-546-133743691907249/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.386 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.386 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:24:56 compute-0 sudo[199580]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.410 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.427 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
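Annotation: the inventory the resource tracker reports above maps to schedulable capacity as placement computes it, capacity = (total - reserved) * allocation_ratio. Worked out for the logged values (illustrative Python):

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 71.1

So this node can overcommit to 32 schedulable vCPUs against 8 physical ones, while disk is slightly undercommitted (ratio 0.9).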
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.428 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:24:56 compute-0 nova_compute[189381]: 2025-11-25 10:24:56.428 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:24:56 compute-0 sudo[199656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spthsnqzogrngnravlhnlnsojhhxachj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066295.7253625-546-133743691907249/AnsiballZ_systemd.py'
Nov 25 10:24:56 compute-0 sudo[199656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:57 compute-0 python3.9[199658]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:24:57 compute-0 systemd[1]: Reloading.
Nov 25 10:24:57 compute-0 systemd-sysv-generator[199687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:24:57 compute-0 systemd-rc-local-generator[199684]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:24:57 compute-0 sudo[199656]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:57 compute-0 sudo[199766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtlqdmvzfooaalhxqgxauzhbgfwqfoju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066295.7253625-546-133743691907249/AnsiballZ_systemd.py'
Nov 25 10:24:57 compute-0 sudo[199766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:24:58 compute-0 python3.9[199768]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:24:58 compute-0 systemd[1]: Reloading.
Nov 25 10:24:58 compute-0 systemd-sysv-generator[199800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:24:58 compute-0 systemd-rc-local-generator[199796]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:24:58 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Nov 25 10:24:58 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 25 10:24:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 25 10:24:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.
Nov 25 10:24:59 compute-0 podman[199807]: 2025-11-25 10:24:59.240181388 +0000 UTC m=+0.615948895 container init 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4)
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + sudo -E kolla_set_configs
Nov 25 10:24:59 compute-0 sudo[199828]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: sudo: unable to send audit message: Operation not permitted
Nov 25 10:24:59 compute-0 sudo[199828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:24:59 compute-0 podman[199807]: 2025-11-25 10:24:59.275383409 +0000 UTC m=+0.651150896 container start 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Validating config file
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Copying service configuration files
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: INFO:__main__:Writing out command to execute
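Each Deleting/Copying/Setting permission line above is kolla's config step walking the config_files list in the mounted /var/lib/kolla/config_files/config.json, and "Writing out command to execute" persists that json's command key to /run_command for the next phase. A config.json of roughly the following shape would drive exactly this sequence (a sketch: the sources, destinations, and command are taken from the lines above, while the owner and perm values are assumptions not shown in the log):

    # sketch of the mounted kolla config.json implied by the copy loop above
    cat > /var/lib/kolla/config_files/config.json <<'EOF'
    {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"}
        ]
    }
    EOF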
Nov 25 10:24:59 compute-0 sudo[199828]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: ++ cat /run_command
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + ARGS=
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + sudo kolla_copy_cacerts
Nov 25 10:24:59 compute-0 sudo[199844]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: sudo: unable to send audit message: Operation not permitted
Nov 25 10:24:59 compute-0 sudo[199844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:24:59 compute-0 sudo[199844]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + [[ ! -n '' ]]
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + . kolla_extend_start
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + umask 0022
Nov 25 10:24:59 compute-0 ceilometer_agent_compute[199822]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
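The '+'-prefixed lines are bash xtrace output from the tail of kolla_start itself. Reconstructed from the trace alone (a sketch, not the packaged script), the logic is approximately:

    # hedged reconstruction of the traced kolla_start tail
    CMD="$(cat /run_command)"            # written during the config phase above
    ARGS=""
    sudo kolla_copy_cacerts              # refresh CA trust as root; the audit message is benign
    if [[ ! -n "${ARGS}" ]]; then
        . kolla_extend_start             # per-image hook; traces as a no-op here
    fi
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}                  # unquoted on purpose so the command word-splits

After the exec, ceilometer-polling replaces the shell as the container's main process, which is why the oslo.config output that follows appears under the same ceilometer_agent_compute[199822] tag.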
Nov 25 10:24:59 compute-0 podman[199807]: ceilometer_agent_compute
Nov 25 10:24:59 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Nov 25 10:24:59 compute-0 sudo[199766]: pam_unix(sudo:session): session closed for user root
Nov 25 10:24:59 compute-0 podman[199829]: 2025-11-25 10:24:59.696285734 +0000 UTC m=+0.412022191 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 25 10:24:59 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-16247c110c65e2b.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:24:59 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-16247c110c65e2b.service: Failed with result 'exit-code'.
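The transient unit that just failed (11e71f…-16247c110c65e2b.service) is podman's systemd-driven health check runner for this container, and its status=1/FAILURE lines up with health_status=starting and health_failing_streak=1 in the podman event above: the first probe fired before the freshly exec'd agent was ready to answer. To re-run the probe and read the recorded health state by hand on compute-0 (standard podman commands; on older podman releases the JSON path is .State.Healthcheck rather than .State.Health):

    # re-run the container health check and inspect the recorded result
    podman healthcheck run ceilometer_agent_compute; echo "rc=$?"
    podman inspect --format '{{json .State.Health}}' ceilometer_agent_compute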
Nov 25 10:25:00 compute-0 sudo[200003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmryxzussshockzrbvceryiizvxmzasf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066299.81312-570-86432085459846/AnsiballZ_systemd.py'
Nov 25 10:25:00 compute-0 sudo[200003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.217 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.218 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.219 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.220 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.221 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.222 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.223 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.224 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.230 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.231 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.231 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
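Most of the options in the PID-2 dump above are oslo.config defaults; the giveaways for what the copied-in ceilometer.conf and conf.d snippets actually set are the values that deviate (debug, the deprecated tenant_name_discovery spelling behind the WARNING, the [polling] exporter/TLS block overriding the DEFAULT-level exporter settings, the heartbeat socket dir) plus the ****-masked coordination and notification URLs. Condensed to INI, the non-default surface would look roughly like this (a sketch inferred from the dump; how it is split across the 01-/02- snippets is not visible in the log, and the masked values stay masked):

    # sketch of the effective non-default settings implied by the dump above
    cat <<'EOF'
    [DEFAULT]
    debug = True
    tenant_name_discovery = False   # deprecated name; source of the WARNING at 10:25:00.220

    [polling]
    enable_prometheus_exporter = True
    prometheus_listen_addresses = [::]:9101
    prometheus_tls_enable = True
    prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt
    prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key
    heartbeat_socket_dir = /var/lib/ceilometer

    [coordination]
    backend_url = ****              # masked by oslo.config secret handling

    [notification]
    messaging_urls = ****           # masked likewise
    EOF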
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.253 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
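The socket path in the line above is polling.heartbeat_socket_dir from the dump joined with the polling namespace name, and the child pollster process reports liveness over it. A quick host-side check that the socket exists and is being listened on (plain coreutils and iproute2, run on compute-0):

    # confirm the heartbeat unix socket announced by the child service
    ls -l /var/lib/ceilometer/ceilometer-compute.socket
    ss -xl | grep ceilometer-compute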
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.253 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.254 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.255 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.256 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.257 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.258 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.259 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.260 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.263 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.264 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.264 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.266 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.267 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.268 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 25 10:25:00 compute-0 python3.9[200005]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.492 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.502 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.504 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.505 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 25 10:25:00 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.620 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.639 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.639 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.639 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.639 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.639 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.640 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.640 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.640 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.640 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.640 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.640 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.640 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.641 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.641 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.641 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.641 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.641 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.641 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.641 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.641 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.642 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.642 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.642 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.643 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.643 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.643 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.643 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.643 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.644 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.645 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.646 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.647 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.648 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.649 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.649 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.649 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.649 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.649 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.649 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.649 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.649 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.650 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.651 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.652 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.653 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.654 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.655 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.656 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.657 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
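The block above is oslo.config's log_opt_values() dump of the effective configuration at service start (emitted at DEBUG only; secrets such as service_credentials.password are masked as ****). A minimal, hypothetical ceilometer.conf fragment that would produce the non-default [service_credentials] values shown, round-tripped here through Python's stdlib configparser:

import configparser

# Hypothetical reconstruction from the option dump above; the password
# is masked in the log, so a placeholder stands in for it here.
CEILOMETER_CONF = """\
[service_credentials]
auth_type = password
auth_url = https://keystone-internal.openstack.svc:5000
username = ceilometer
password = ****
project_name = service
project_domain_name = Default
user_domain_name = Default
interface = internalURL
"""

cfg = configparser.ConfigParser()
cfg.read_string(CEILOMETER_CONF)
assert cfg["service_credentials"]["username"] == "ceilometer"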
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.657 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.659 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
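The "Config file" line above shows the polling definition after parsing. A sketch of the polling.yaml text that parses to exactly that dict (assuming PyYAML is installed):

import yaml

# Hypothetical polling.yaml matching the parsed dict in the log line
# above: one source named "pollsters", a 120-second interval, and five
# meter patterns.
POLLING_YAML = """\
sources:
  - name: pollsters
    interval: 120
    meters:
      - power.state
      - cpu
      - memory.usage
      - disk.*
      - network.*
"""

parsed = yaml.safe_load(POLLING_YAML)
assert parsed == {"sources": [{"name": "pollsters", "interval": 120,
                               "meters": ["power.state", "cpu", "memory.usage",
                                          "disk.*", "network.*"]}]}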
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.673 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.674 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.674 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f7800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f15ae5f77d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.676 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.676 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f7860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.676 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f5880>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.676 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f78c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.676 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f7920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.676 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.676 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f41d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f5a00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15b189da60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f4260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f5a90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f5ac0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f5af0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15b1be4b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f7c80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.677 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f4500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.678 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f56a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.678 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.678 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f76e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.678 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f7740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.678 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f5790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.678 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f77a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.678 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f15ae5f57f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f15ae1963f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.680 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f15ae5f7830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.680 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f15ae5f5820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.680 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f15ae5f7890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f15ae5f78f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f15ae5f7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f15ae5f7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f15ae5f79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f15ae5f58e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f15ae5f61e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f15ae5f4230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f15ae5f5a60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f15ae5f5670>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.682 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f15ae5f5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f15af88f440>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f15ae5f7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f15b221b050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f15ae5f7c50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f15ae5f44d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f15ae5f56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f15ae5f6210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f15ae5f7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f15ae5f7710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f15ae5f57c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f15ae5f7770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.684 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f15ae5f5b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f15af777b30>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
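Every pollster in this cycle is skipped with "no  resources found this cycle": the local_instances discovery, which the earlier line shows connecting to libvirt at qemu:///system, found no running instances on this host. A minimal sketch of that check, assuming the libvirt-python bindings are available:

import libvirt

# Roughly what instance discovery amounts to on the compute host: ask
# libvirt for the locally defined/running domains. An empty list means
# every compute pollster is skipped for the cycle, as in the log above.
conn = libvirt.openReadOnly("qemu:///system")
domains = conn.listAllDomains()
if not domains:
    print("no resources found this cycle")
for dom in domains:
    print(dom.UUIDString(), dom.name())
conn.close()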
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
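The registration lines earlier and the burst of "Finished processing pollster" messages here describe a submit-then-drain pattern: each of the 26 pollsters is handed to a single-worker ThreadPoolExecutor ("Processing pollsters for [pollsters] with [1] threads"), so they run sequentially and a cycle can overrun the 120-second interval, which is what the "bigger than the number of worker threads" warning cautions about. A schematic reconstruction, with hypothetical pollster callables:

from concurrent.futures import ThreadPoolExecutor

# Schematic only: real pollsters query libvirt and build samples; here
# each "pollster" is a stub so the submit/drain shape is visible.
def poll(name):
    return name  # discovery returned nothing, so there is no work

pollsters = ["cpu", "memory.usage", "power.state"]  # abbreviated list

with ThreadPoolExecutor(max_workers=1) as executor:  # "[1] threads"
    futures = [executor.submit(poll, p) for p in pollsters]
    for fut in futures:
        print("Finished processing pollster [%s]." % fut.result())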
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.722 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.722 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.722 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.722 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Nov 25 10:25:00 compute-0 ceilometer_agent_compute[199822]: 2025-11-25 10:25:00.731 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
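This is cotyledon's normal shutdown sequence: the service master (logged as pid 2 inside the container) sends SIGTERM to its worker services (pids 12 and 14 above), waits for them to exit, then reports "Shutdown finish". A toy reproduction of that master/worker signalling on Linux:

import os
import signal
import sys
import time

def worker():
    # The worker installs a handler so SIGTERM exits gracefully,
    # mirroring "Caught SIGTERM signal, graceful exiting of service".
    signal.signal(signal.SIGTERM, lambda signum, frame: sys.exit(0))
    while True:
        time.sleep(1)

pid = os.fork()
if pid == 0:
    worker()
else:
    time.sleep(0.2)                 # let the handler get installed
    os.kill(pid, signal.SIGTERM)    # "Killing services with signal SIGTERM"
    os.waitpid(pid, 0)              # "Waiting services to terminate"
    print("Shutdown finish")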
Nov 25 10:25:00 compute-0 virtqemud[189024]: End of file while reading data: Input/output error
Nov 25 10:25:00 compute-0 virtqemud[189024]: End of file while reading data: Input/output error
Nov 25 10:25:00 compute-0 systemd[1]: libpod-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope: Deactivated successfully.
Nov 25 10:25:00 compute-0 systemd[1]: libpod-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope: Consumed 1.682s CPU time.
Nov 25 10:25:00 compute-0 podman[200017]: 2025-11-25 10:25:00.984890063 +0000 UTC m=+0.452312018 container died 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:25:01 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-16247c110c65e2b.timer: Deactivated successfully.
Nov 25 10:25:01 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.
Nov 25 10:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-userdata-shm.mount: Deactivated successfully.
Nov 25 10:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c-merged.mount: Deactivated successfully.
Nov 25 10:25:01 compute-0 podman[200017]: 2025-11-25 10:25:01.752840782 +0000 UTC m=+1.220262737 container cleanup 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:25:01 compute-0 podman[200017]: ceilometer_agent_compute
Nov 25 10:25:01 compute-0 podman[200053]: ceilometer_agent_compute
Nov 25 10:25:01 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 25 10:25:01 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Nov 25 10:25:01 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Nov 25 10:25:01 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e68414d86e84b5076360b3c3f7557907f2b5af73cd6cc29d8cde63ace0e54d6c/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:02 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.
Nov 25 10:25:02 compute-0 podman[200066]: 2025-11-25 10:25:02.080930733 +0000 UTC m=+0.235398440 container init 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + sudo -E kolla_set_configs
Nov 25 10:25:02 compute-0 sudo[200087]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: sudo: unable to send audit message: Operation not permitted
Nov 25 10:25:02 compute-0 sudo[200087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:25:02 compute-0 podman[200066]: 2025-11-25 10:25:02.108481224 +0000 UTC m=+0.262948911 container start 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm)
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Validating config file
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Copying service configuration files
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: INFO:__main__:Writing out command to execute
Nov 25 10:25:02 compute-0 sudo[200087]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: ++ cat /run_command
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + ARGS=
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + sudo kolla_copy_cacerts
Nov 25 10:25:02 compute-0 sudo[200101]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: sudo: unable to send audit message: Operation not permitted
Nov 25 10:25:02 compute-0 sudo[200101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:25:02 compute-0 sudo[200101]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + [[ ! -n '' ]]
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + . kolla_extend_start
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + umask 0022
Nov 25 10:25:02 compute-0 ceilometer_agent_compute[200081]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 25 10:25:02 compute-0 podman[200066]: ceilometer_agent_compute
Nov 25 10:25:02 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Nov 25 10:25:02 compute-0 podman[200088]: 2025-11-25 10:25:02.398556832 +0000 UTC m=+0.279441924 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:25:02 compute-0 sudo[200003]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:02 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-6b823eef448fbc51.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:25:02 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-6b823eef448fbc51.service: Failed with result 'exit-code'.
Nov 25 10:25:02 compute-0 sudo[200259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzdenkltkhqkyyqxlyjxjitonlcvmrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066302.5451283-578-189055713838590/AnsiballZ_stat.py'
Nov 25 10:25:02 compute-0 sudo[200259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:03 compute-0 python3.9[200261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:25:03 compute-0 sudo[200259]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.091 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.091 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.092 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.092 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.092 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.092 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.092 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.092 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.092 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.092 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.093 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.093 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.093 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.093 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.093 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.093 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.093 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.093 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.094 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.094 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.094 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.094 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.094 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.094 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.094 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.094 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.095 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.096 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.097 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.098 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.099 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.100 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.101 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.102 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.103 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.104 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.105 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.105 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.105 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.105 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.124 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.124 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.124 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.124 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.124 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.125 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.125 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.125 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.125 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.125 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.125 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.125 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.126 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.127 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.128 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.129 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.130 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.131 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.132 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.133 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.134 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.136 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.138 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.138 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.146 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.152 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.153 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.153 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.293 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.293 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.293 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.293 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.293 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.293 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.293 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.294 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.295 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.295 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.295 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.295 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.295 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.295 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.295 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.296 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.297 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.298 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.299 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.300 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.301 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.302 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.303 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.304 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.305 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.306 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.307 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.307 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.307 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.307 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.307 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.307 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.307 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
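The polling.prometheus_* options above enable a TLS-protected exporter on [::]:9101 using the certificate and key under /etc/ceilometer/tls. A minimal scrape check against it in Python follows; the host name, the CA bundle path, and the /metrics endpoint path are assumptions for illustration, not values taken from this log:

    import ssl
    import urllib.request

    # assumed CA bundle path; substitute whatever signs the exporter's tls.crt
    ctx = ssl.create_default_context(cafile="/etc/pki/tls/certs/ca-bundle.crt")
    # assumed host and endpoint path for the exporter listening on [::]:9101
    with urllib.request.urlopen("https://compute-0:9101/metrics", context=ctx) as resp:
        print(resp.read().decode("utf-8")[:400])  # first few hundred bytes of exposition text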
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.307 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.310 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
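The parsed structure logged above corresponds to a polling.yaml along these lines; this is a reconstruction from the logged dict, not the file as it exists on the node:

    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - "disk.*"
          - "network.*"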
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.324 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.325 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.326 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
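The agent connects read-only to the local hypervisor here, and the local_instances discovery cache in the lines below stays empty. A minimal sketch, assuming the libvirt-python bindings are installed on the node, that reproduces the same check outside the agent:

    import libvirt  # libvirt-python bindings (assumed installed)

    # mirror the agent's connection URI from the log line above
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        # an empty list here is consistent with the empty local_instances
        # discovery cache and the "Skip pollster ..." lines that follow
        print([dom.name() for dom in conn.listAllDomains()])
    finally:
        conn.close()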
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:25:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:25:03 compute-0 sudo[200395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgretpeieuejrmqoajsdotsanfxiliol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066302.5451283-578-189055713838590/AnsiballZ_copy.py'
Nov 25 10:25:03 compute-0 sudo[200395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:03 compute-0 python3.9[200397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066302.5451283-578-189055713838590/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:25:03 compute-0 sudo[200395]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:04 compute-0 sudo[200547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aitncdqqaclzjolzkdyarsuywvwuiyza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066303.9361563-595-145342313306848/AnsiballZ_container_config_data.py'
Nov 25 10:25:04 compute-0 sudo[200547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:04 compute-0 python3.9[200549]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Nov 25 10:25:04 compute-0 sudo[200547]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:05 compute-0 sudo[200699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqzirjxycusttwfzzgaabqipcdpuxjyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066304.7774947-604-80292771567088/AnsiballZ_container_config_hash.py'
Nov 25 10:25:05 compute-0 sudo[200699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:05 compute-0 python3.9[200701]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:25:05 compute-0 sudo[200699]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:05 compute-0 sudo[200851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frunugxhlssybxkuijvvvmzucqlqdjsp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066305.6270635-614-275591954270041/AnsiballZ_edpm_container_manage.py'
Nov 25 10:25:05 compute-0 sudo[200851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:06 compute-0 python3[200853]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:25:06 compute-0 podman[200888]: 2025-11-25 10:25:06.487695269 +0000 UTC m=+0.021593901 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 25 10:25:06 compute-0 podman[200888]: 2025-11-25 10:25:06.671846386 +0000 UTC m=+0.205744998 container create 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 10:25:06 compute-0 python3[200853]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Nov 25 10:25:06 compute-0 sudo[200851]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:07 compute-0 sudo[201075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iphpayfzwtldpkfasatpxeprbkcowffb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066306.9788487-622-169878031528487/AnsiballZ_stat.py'
Nov 25 10:25:07 compute-0 sudo[201075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:07 compute-0 python3.9[201077]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:25:07 compute-0 sudo[201075]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:08 compute-0 sudo[201229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztmrigeyayimsinfeuodrllvltkmbagj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066307.7643492-631-251426680782384/AnsiballZ_file.py'
Nov 25 10:25:08 compute-0 sudo[201229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:08 compute-0 python3.9[201231]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:25:08 compute-0 sudo[201229]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:09 compute-0 sudo[201380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgwtzvouyjcvgubgkfsjkctytyvgwzjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066308.3522234-631-189768376520268/AnsiballZ_copy.py'
Nov 25 10:25:09 compute-0 sudo[201380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:09 compute-0 python3.9[201382]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764066308.3522234-631-189768376520268/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:25:09 compute-0 sudo[201380]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:09 compute-0 auditd[700]: Audit daemon rotating log files
Nov 25 10:25:09 compute-0 sudo[201456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvphkvzjexqvjvsbpvqtzvkwcwoydmen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066308.3522234-631-189768376520268/AnsiballZ_systemd.py'
Nov 25 10:25:09 compute-0 sudo[201456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:09 compute-0 python3.9[201458]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:25:09 compute-0 systemd[1]: Reloading.
Nov 25 10:25:10 compute-0 systemd-sysv-generator[201489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:25:10 compute-0 systemd-rc-local-generator[201485]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:25:10 compute-0 sudo[201456]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:10 compute-0 sudo[201567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imhrrduouylbfdhjxebklumkfgeygied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066308.3522234-631-189768376520268/AnsiballZ_systemd.py'
Nov 25 10:25:10 compute-0 sudo[201567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:10 compute-0 python3.9[201569]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:25:10 compute-0 systemd[1]: Reloading.
Nov 25 10:25:11 compute-0 systemd-rc-local-generator[201596]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:25:11 compute-0 systemd-sysv-generator[201601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:25:11 compute-0 systemd[1]: Starting node_exporter container...
Nov 25 10:25:11 compute-0 podman[201610]: 2025-11-25 10:25:11.427901492 +0000 UTC m=+0.081714407 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 10:25:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd13409a925cdf0d6aee0439ae146b47651f6b3e54a0712f38a4c4abafa2be1/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd13409a925cdf0d6aee0439ae146b47651f6b3e54a0712f38a4c4abafa2be1/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:11 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.
Nov 25 10:25:11 compute-0 podman[201611]: 2025-11-25 10:25:11.489320416 +0000 UTC m=+0.138663053 container init 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.504Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.504Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.504Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.504Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.504Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.504Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.505Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=node_exporter.go:117 level=info collector=systemd
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.506Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 25 10:25:11 compute-0 node_exporter[201641]: ts=2025-11-25T10:25:11.507Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 25 10:25:11 compute-0 podman[201611]: 2025-11-25 10:25:11.515626741 +0000 UTC m=+0.164969358 container start 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:25:11 compute-0 podman[201611]: node_exporter
Nov 25 10:25:11 compute-0 systemd[1]: Started node_exporter container.
Nov 25 10:25:11 compute-0 sudo[201567]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:11 compute-0 podman[201653]: 2025-11-25 10:25:11.623006764 +0000 UTC m=+0.092472366 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:25:12 compute-0 sudo[201827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yupnecsdltwhfliiphdaiuinkzomvmcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066311.740414-655-101420616206652/AnsiballZ_systemd.py'
Nov 25 10:25:12 compute-0 sudo[201827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:12 compute-0 python3.9[201829]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:25:12 compute-0 systemd[1]: Stopping node_exporter container...
Nov 25 10:25:12 compute-0 systemd[1]: libpod-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope: Deactivated successfully.
Nov 25 10:25:12 compute-0 podman[201833]: 2025-11-25 10:25:12.588591298 +0000 UTC m=+0.090578822 container died 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:25:12 compute-0 systemd[1]: 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4-4fd6be54c65fadad.timer: Deactivated successfully.
Nov 25 10:25:12 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.
Nov 25 10:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4-userdata-shm.mount: Deactivated successfully.
Nov 25 10:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fd13409a925cdf0d6aee0439ae146b47651f6b3e54a0712f38a4c4abafa2be1-merged.mount: Deactivated successfully.
Nov 25 10:25:12 compute-0 podman[201833]: 2025-11-25 10:25:12.727822956 +0000 UTC m=+0.229810470 container cleanup 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:25:12 compute-0 podman[201833]: node_exporter
Nov 25 10:25:12 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 25 10:25:12 compute-0 podman[201859]: node_exporter
Nov 25 10:25:12 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Nov 25 10:25:12 compute-0 systemd[1]: Stopped node_exporter container.
Nov 25 10:25:12 compute-0 systemd[1]: Starting node_exporter container...
Nov 25 10:25:12 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd13409a925cdf0d6aee0439ae146b47651f6b3e54a0712f38a4c4abafa2be1/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fd13409a925cdf0d6aee0439ae146b47651f6b3e54a0712f38a4c4abafa2be1/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:12 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.
Nov 25 10:25:12 compute-0 podman[201872]: 2025-11-25 10:25:12.919752327 +0000 UTC m=+0.105137840 container init 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.933Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.933Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.933Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.933Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.933Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=arp
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=bcache
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=bonding
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=cpu
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=edac
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=filefd
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=netclass
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=netdev
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=netstat
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=nfs
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=nvme
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=softnet
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=systemd
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=xfs
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.934Z caller=node_exporter.go:117 level=info collector=zfs
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.935Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 25 10:25:12 compute-0 node_exporter[201887]: ts=2025-11-25T10:25:12.935Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 25 10:25:12 compute-0 podman[201872]: 2025-11-25 10:25:12.951246351 +0000 UTC m=+0.136631864 container start 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:25:13 compute-0 rsyslogd[1010]: imjournal from <np0005534753:node_exporter>: begin to drop messages due to rate-limiting
Nov 25 10:25:13 compute-0 podman[201872]: node_exporter
Nov 25 10:25:13 compute-0 systemd[1]: Started node_exporter container.
Nov 25 10:25:13 compute-0 sudo[201827]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:13 compute-0 podman[201897]: 2025-11-25 10:25:13.19432762 +0000 UTC m=+0.233301789 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:25:13 compute-0 sudo[202070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fprcddlhttamqowwlbgnmcxznqeenjyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066313.3685539-663-45499651132573/AnsiballZ_stat.py'
Nov 25 10:25:13 compute-0 sudo[202070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:13 compute-0 python3.9[202072]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:25:13 compute-0 sudo[202070]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:14 compute-0 sudo[202193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvfkwuvyfzrdnykliguiiawhzhfacjmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066313.3685539-663-45499651132573/AnsiballZ_copy.py'
Nov 25 10:25:14 compute-0 sudo[202193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:14 compute-0 python3.9[202195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066313.3685539-663-45499651132573/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:25:14 compute-0 sudo[202193]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:14 compute-0 sudo[202345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwmbfztdqmpujxnnoyljmflsrcoawkyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066314.6753347-680-24751260035245/AnsiballZ_container_config_data.py'
Nov 25 10:25:14 compute-0 sudo[202345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:15 compute-0 python3.9[202347]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 25 10:25:15 compute-0 sudo[202345]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:15 compute-0 sudo[202497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqfkkvahprvkgnrcphzzqslzelcokjlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066315.360287-689-133158668180096/AnsiballZ_container_config_hash.py'
Nov 25 10:25:15 compute-0 sudo[202497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:15 compute-0 python3.9[202499]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:25:15 compute-0 sudo[202497]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:16 compute-0 sudo[202649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlijzuajsnhmwvhpudgdzvhwkujegqzu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066316.1159415-699-240980063035516/AnsiballZ_edpm_container_manage.py'
Nov 25 10:25:16 compute-0 sudo[202649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:16 compute-0 python3[202651]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:25:18 compute-0 podman[202684]: 2025-11-25 10:25:18.001691929 +0000 UTC m=+0.113113198 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:25:19 compute-0 podman[202665]: 2025-11-25 10:25:19.63448486 +0000 UTC m=+2.865108434 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 25 10:25:19 compute-0 podman[202789]: 2025-11-25 10:25:19.770238968 +0000 UTC m=+0.025057090 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 25 10:25:20 compute-0 podman[202789]: 2025-11-25 10:25:20.286381328 +0000 UTC m=+0.541199450 container create ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Nov 25 10:25:20 compute-0 python3[202651]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Nov 25 10:25:20 compute-0 sudo[202649]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:20 compute-0 sshd-session[202765]: Connection closed by authenticating user root 171.244.51.45 port 46032 [preauth]
Nov 25 10:25:20 compute-0 sudo[202977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgppubrufxgwtrlinojmjabrtoxwntdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066320.5830352-707-108834425677043/AnsiballZ_stat.py'
Nov 25 10:25:20 compute-0 sudo[202977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:21 compute-0 python3.9[202979]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:25:21 compute-0 sudo[202977]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:21 compute-0 sudo[203131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixdnkwoxfjrluujimnxkbfrbmllqiuct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066321.3222823-716-84260104205545/AnsiballZ_file.py'
Nov 25 10:25:21 compute-0 sudo[203131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:21 compute-0 python3.9[203133]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:25:21 compute-0 sudo[203131]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:22 compute-0 sudo[203282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayharzhszjmlctfqloffqnnadrhcsoxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066321.8743024-716-171496958669234/AnsiballZ_copy.py'
Nov 25 10:25:22 compute-0 sudo[203282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:22 compute-0 python3.9[203284]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764066321.8743024-716-171496958669234/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:25:22 compute-0 sudo[203282]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:22 compute-0 sudo[203368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbjbgrcjombxcxirkzpwbjuqzadimkdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066321.8743024-716-171496958669234/AnsiballZ_systemd.py'
Nov 25 10:25:22 compute-0 sudo[203368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:22 compute-0 podman[203332]: 2025-11-25 10:25:22.791402992 +0000 UTC m=+0.064573895 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:25:23 compute-0 python3.9[203379]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:25:23 compute-0 systemd[1]: Reloading.
Nov 25 10:25:23 compute-0 systemd-rc-local-generator[203408]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:25:23 compute-0 systemd-sysv-generator[203411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:25:23 compute-0 sudo[203368]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:23 compute-0 sudo[203489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyqlxcjpowfymozdyyjhcwewfgtiiacq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066321.8743024-716-171496958669234/AnsiballZ_systemd.py'
Nov 25 10:25:23 compute-0 sudo[203489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:24 compute-0 python3.9[203491]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:25:24 compute-0 systemd[1]: Reloading.
Nov 25 10:25:24 compute-0 systemd-rc-local-generator[203521]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:25:24 compute-0 systemd-sysv-generator[203525]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:25:24 compute-0 systemd[1]: Starting podman_exporter container...
Nov 25 10:25:24 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab421302a89cd305fc314252ce5d18ffcee1b1be9b73a7ca2169e86cef07f69d/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab421302a89cd305fc314252ce5d18ffcee1b1be9b73a7ca2169e86cef07f69d/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:24 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.
Nov 25 10:25:25 compute-0 podman[203531]: 2025-11-25 10:25:25.02664531 +0000 UTC m=+0.439322875 container init ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:25:25 compute-0 podman_exporter[203546]: ts=2025-11-25T10:25:25.044Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 25 10:25:25 compute-0 podman_exporter[203546]: ts=2025-11-25T10:25:25.045Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 25 10:25:25 compute-0 podman_exporter[203546]: ts=2025-11-25T10:25:25.045Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 25 10:25:25 compute-0 podman_exporter[203546]: ts=2025-11-25T10:25:25.045Z caller=handler.go:105 level=info collector=container
Nov 25 10:25:25 compute-0 podman[203531]: 2025-11-25 10:25:25.051325179 +0000 UTC m=+0.464002734 container start ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:25:25 compute-0 systemd[1]: Starting Podman API Service...
Nov 25 10:25:25 compute-0 systemd[1]: Started Podman API Service.
Nov 25 10:25:25 compute-0 podman[203557]: time="2025-11-25T10:25:25Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 25 10:25:25 compute-0 podman[203557]: time="2025-11-25T10:25:25Z" level=info msg="Setting parallel job count to 25"
Nov 25 10:25:25 compute-0 podman[203557]: time="2025-11-25T10:25:25Z" level=info msg="Using sqlite as database backend"
Nov 25 10:25:25 compute-0 podman[203531]: podman_exporter
Nov 25 10:25:25 compute-0 systemd[1]: Started podman_exporter container.
Nov 25 10:25:25 compute-0 podman[203557]: time="2025-11-25T10:25:25Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 25 10:25:25 compute-0 podman[203557]: time="2025-11-25T10:25:25Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 25 10:25:25 compute-0 podman[203557]: time="2025-11-25T10:25:25Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 25 10:25:25 compute-0 podman[203557]: @ - - [25/Nov/2025:10:25:25 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 25 10:25:25 compute-0 podman[203557]: time="2025-11-25T10:25:25Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:25:25 compute-0 sudo[203489]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:25 compute-0 podman[203557]: @ - - [25/Nov/2025:10:25:25 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19586 "" "Go-http-client/1.1"
Nov 25 10:25:25 compute-0 podman[203555]: 2025-11-25 10:25:25.174522186 +0000 UTC m=+0.111235525 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:25:25 compute-0 podman_exporter[203546]: ts=2025-11-25T10:25:25.175Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 25 10:25:25 compute-0 podman_exporter[203546]: ts=2025-11-25T10:25:25.176Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 25 10:25:25 compute-0 podman_exporter[203546]: ts=2025-11-25T10:25:25.176Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 25 10:25:25 compute-0 systemd[1]: ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030-1bcadd3c7f1fb130.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:25:25 compute-0 systemd[1]: ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030-1bcadd3c7f1fb130.service: Failed with result 'exit-code'.
Nov 25 10:25:25 compute-0 sudo[203741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbnzddmfxztswzwgcleciayyogstbfyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066325.3146622-740-142493536994281/AnsiballZ_systemd.py'
Nov 25 10:25:25 compute-0 sudo[203741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:26 compute-0 python3.9[203743]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:25:26 compute-0 systemd[1]: Stopping podman_exporter container...
Nov 25 10:25:26 compute-0 podman[203557]: @ - - [25/Nov/2025:10:25:25 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Nov 25 10:25:26 compute-0 systemd[1]: libpod-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope: Deactivated successfully.
Nov 25 10:25:26 compute-0 podman[203747]: 2025-11-25 10:25:26.19164677 +0000 UTC m=+0.094586217 container died ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:25:26 compute-0 systemd[1]: ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030-1bcadd3c7f1fb130.timer: Deactivated successfully.
Nov 25 10:25:26 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.
Nov 25 10:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030-userdata-shm.mount: Deactivated successfully.
Nov 25 10:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab421302a89cd305fc314252ce5d18ffcee1b1be9b73a7ca2169e86cef07f69d-merged.mount: Deactivated successfully.
Nov 25 10:25:27 compute-0 podman[203747]: 2025-11-25 10:25:27.562064127 +0000 UTC m=+1.465003564 container cleanup ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:25:27 compute-0 podman[203747]: podman_exporter
Nov 25 10:25:27 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 25 10:25:27 compute-0 podman[203776]: podman_exporter
Nov 25 10:25:27 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Nov 25 10:25:27 compute-0 systemd[1]: Stopped podman_exporter container.
Nov 25 10:25:27 compute-0 systemd[1]: Starting podman_exporter container...
Nov 25 10:25:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab421302a89cd305fc314252ce5d18ffcee1b1be9b73a7ca2169e86cef07f69d/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab421302a89cd305fc314252ce5d18ffcee1b1be9b73a7ca2169e86cef07f69d/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:28 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.
Nov 25 10:25:28 compute-0 podman[203789]: 2025-11-25 10:25:28.093708292 +0000 UTC m=+0.430252215 container init ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:25:28 compute-0 podman_exporter[203804]: ts=2025-11-25T10:25:28.107Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 25 10:25:28 compute-0 podman_exporter[203804]: ts=2025-11-25T10:25:28.107Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 25 10:25:28 compute-0 podman_exporter[203804]: ts=2025-11-25T10:25:28.107Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 25 10:25:28 compute-0 podman_exporter[203804]: ts=2025-11-25T10:25:28.107Z caller=handler.go:105 level=info collector=container
Nov 25 10:25:28 compute-0 podman[203557]: @ - - [25/Nov/2025:10:25:28 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 25 10:25:28 compute-0 podman[203557]: time="2025-11-25T10:25:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:25:28 compute-0 podman[203789]: 2025-11-25 10:25:28.119117711 +0000 UTC m=+0.455661624 container start ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:25:28 compute-0 podman[203789]: podman_exporter
Nov 25 10:25:28 compute-0 podman[203557]: @ - - [25/Nov/2025:10:25:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19588 "" "Go-http-client/1.1"
Nov 25 10:25:28 compute-0 podman_exporter[203804]: ts=2025-11-25T10:25:28.204Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 25 10:25:28 compute-0 podman_exporter[203804]: ts=2025-11-25T10:25:28.205Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 25 10:25:28 compute-0 podman_exporter[203804]: ts=2025-11-25T10:25:28.205Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 25 10:25:28 compute-0 systemd[1]: Started podman_exporter container.
Nov 25 10:25:28 compute-0 podman[203814]: 2025-11-25 10:25:28.259332817 +0000 UTC m=+0.130052335 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:25:28 compute-0 sudo[203741]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:28 compute-0 sudo[203987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-choivjhoejgkyrhlbfexdniiwzuhefqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066328.4426508-748-180312539846680/AnsiballZ_stat.py'
Nov 25 10:25:28 compute-0 sudo[203987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:28 compute-0 python3.9[203989]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:25:28 compute-0 sudo[203987]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:29 compute-0 sudo[204110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uygpllydodzlbqauzgmkxbzjvwkxfiop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066328.4426508-748-180312539846680/AnsiballZ_copy.py'
Nov 25 10:25:29 compute-0 sudo[204110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:29 compute-0 python3.9[204112]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066328.4426508-748-180312539846680/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:25:29 compute-0 sudo[204110]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:30 compute-0 sudo[204262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zekuwfcsbtkesohkxjdqslvdpfdhtsdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066329.8255713-765-200618447948175/AnsiballZ_container_config_data.py'
Nov 25 10:25:30 compute-0 sudo[204262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:30 compute-0 python3.9[204264]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Nov 25 10:25:30 compute-0 sudo[204262]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:30 compute-0 sudo[204414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucbdwvirsuzihwnsjuxkvjjhjubwymqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066330.5671563-774-228660074881693/AnsiballZ_container_config_hash.py'
Nov 25 10:25:30 compute-0 sudo[204414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:31 compute-0 python3.9[204416]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:25:31 compute-0 sudo[204414]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:31 compute-0 sudo[204566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekeabpfptfqxcjyzbkwozfwhztqlmgnt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066331.3877916-784-117245611827430/AnsiballZ_edpm_container_manage.py'
Nov 25 10:25:31 compute-0 sudo[204566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:32 compute-0 python3[204568]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:25:32 compute-0 podman[204595]: 2025-11-25 10:25:32.939467943 +0000 UTC m=+0.052707714 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 10:25:32 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-6b823eef448fbc51.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:25:32 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-6b823eef448fbc51.service: Failed with result 'exit-code'.
Nov 25 10:25:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:25:36.013 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:25:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:25:36.014 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:25:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:25:36.015 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:25:36 compute-0 rsyslogd[1010]: imjournal: 172 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 25 10:25:36 compute-0 podman[204582]: 2025-11-25 10:25:36.581676216 +0000 UTC m=+4.505677946 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 25 10:25:36 compute-0 podman[204695]: 2025-11-25 10:25:36.691224093 +0000 UTC m=+0.021039255 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 25 10:25:37 compute-0 podman[204695]: 2025-11-25 10:25:37.117782131 +0000 UTC m=+0.447597263 container create 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, version=9.6, config_id=edpm)
Nov 25 10:25:37 compute-0 python3[204568]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 25 10:25:37 compute-0 sudo[204566]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:37 compute-0 sudo[204883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufejhnvnrnuwpffjgfiscwbxilghnstt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066337.4157028-792-120474331802402/AnsiballZ_stat.py'
Nov 25 10:25:37 compute-0 sudo[204883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:37 compute-0 python3.9[204885]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:25:37 compute-0 sudo[204883]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:38 compute-0 sudo[205037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vppqrywvsefhifgfpfffxpvacanqkumm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066338.1478834-801-15045417937925/AnsiballZ_file.py'
Nov 25 10:25:38 compute-0 sudo[205037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:38 compute-0 python3.9[205039]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:25:38 compute-0 sudo[205037]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:39 compute-0 sudo[205188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivddoytqzuljdmkkqdnsmxmcfpzcllwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066338.7834609-801-256987978300654/AnsiballZ_copy.py'
Nov 25 10:25:39 compute-0 sudo[205188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:39 compute-0 python3.9[205190]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764066338.7834609-801-256987978300654/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:25:39 compute-0 sudo[205188]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:39 compute-0 sudo[205264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onkqxpbhrsilqgqefzsguhfifjojbbpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066338.7834609-801-256987978300654/AnsiballZ_systemd.py'
Nov 25 10:25:39 compute-0 sudo[205264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:39 compute-0 python3.9[205266]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:25:39 compute-0 systemd[1]: Reloading.
Nov 25 10:25:40 compute-0 systemd-rc-local-generator[205293]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:25:40 compute-0 systemd-sysv-generator[205296]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:25:40 compute-0 sudo[205264]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:40 compute-0 sudo[205374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzszyunojbfqgzzcejmuwtraczxcuixh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066338.7834609-801-256987978300654/AnsiballZ_systemd.py'
Nov 25 10:25:40 compute-0 sudo[205374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:40 compute-0 python3.9[205376]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:25:41 compute-0 systemd[1]: Reloading.
Nov 25 10:25:41 compute-0 systemd-rc-local-generator[205404]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:25:41 compute-0 systemd-sysv-generator[205409]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:25:41 compute-0 systemd[1]: Starting openstack_network_exporter container...
Nov 25 10:25:41 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9a1e690f1d5529bfba48e84b19be2655ed6e8ee1f197c4cc996bb401a47b42/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:41 compute-0 podman[205430]: 2025-11-25 10:25:41.667539054 +0000 UTC m=+0.182395508 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9a1e690f1d5529bfba48e84b19be2655ed6e8ee1f197c4cc996bb401a47b42/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9a1e690f1d5529bfba48e84b19be2655ed6e8ee1f197c4cc996bb401a47b42/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:41 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.
Nov 25 10:25:41 compute-0 podman[205416]: 2025-11-25 10:25:41.936053402 +0000 UTC m=+0.508814399 container init 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *bridge.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *coverage.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *datapath.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *iface.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *memory.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *ovnnorthd.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *ovn.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *ovsdbserver.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *pmd_perf.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *pmd_rxq.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: INFO    10:25:41 main.go:48: registering *vswitch.Collector
Nov 25 10:25:41 compute-0 openstack_network_exporter[205448]: NOTICE  10:25:41 main.go:76: listening on https://:9105/metrics
Nov 25 10:25:41 compute-0 podman[205416]: 2025-11-25 10:25:41.962363418 +0000 UTC m=+0.535124395 container start 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9)
Nov 25 10:25:42 compute-0 podman[205416]: openstack_network_exporter
Nov 25 10:25:42 compute-0 systemd[1]: Started openstack_network_exporter container.
Nov 25 10:25:42 compute-0 sudo[205374]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:42 compute-0 podman[205462]: 2025-11-25 10:25:42.184899927 +0000 UTC m=+0.211766301 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, io.buildah.version=1.33.7, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 10:25:42 compute-0 sudo[205634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpywsgfdagfuextuyybswpjszuugtzxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066342.30691-825-98418788918946/AnsiballZ_systemd.py'
Nov 25 10:25:42 compute-0 sudo[205634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:42 compute-0 python3.9[205636]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:25:42 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Nov 25 10:25:43 compute-0 systemd[1]: libpod-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope: Deactivated successfully.
Nov 25 10:25:43 compute-0 podman[205640]: 2025-11-25 10:25:43.315878689 +0000 UTC m=+0.350346389 container died 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 25 10:25:43 compute-0 systemd[1]: 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b-7fa62c8d627810a0.timer: Deactivated successfully.
Nov 25 10:25:43 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.
Nov 25 10:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b-userdata-shm.mount: Deactivated successfully.
Nov 25 10:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d9a1e690f1d5529bfba48e84b19be2655ed6e8ee1f197c4cc996bb401a47b42-merged.mount: Deactivated successfully.
Nov 25 10:25:43 compute-0 podman[205657]: 2025-11-25 10:25:43.591577596 +0000 UTC m=+0.259532673 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:25:46 compute-0 podman[205640]: 2025-11-25 10:25:46.033873149 +0000 UTC m=+3.068340849 container cleanup 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Nov 25 10:25:46 compute-0 podman[205640]: openstack_network_exporter
Nov 25 10:25:46 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 25 10:25:46 compute-0 podman[205694]: openstack_network_exporter
Nov 25 10:25:46 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Nov 25 10:25:46 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Nov 25 10:25:46 compute-0 systemd[1]: Starting openstack_network_exporter container...
Nov 25 10:25:46 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9a1e690f1d5529bfba48e84b19be2655ed6e8ee1f197c4cc996bb401a47b42/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9a1e690f1d5529bfba48e84b19be2655ed6e8ee1f197c4cc996bb401a47b42/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9a1e690f1d5529bfba48e84b19be2655ed6e8ee1f197c4cc996bb401a47b42/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:25:46 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.
Nov 25 10:25:46 compute-0 podman[205707]: 2025-11-25 10:25:46.673201295 +0000 UTC m=+0.546348158 container init 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *bridge.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *coverage.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *datapath.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *iface.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *memory.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *ovnnorthd.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *ovn.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *ovsdbserver.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *pmd_perf.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *pmd_rxq.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: INFO    10:25:46 main.go:48: registering *vswitch.Collector
Nov 25 10:25:46 compute-0 openstack_network_exporter[205722]: NOTICE  10:25:46 main.go:76: listening on https://:9105/metrics
Nov 25 10:25:46 compute-0 podman[205707]: 2025-11-25 10:25:46.705319178 +0000 UTC m=+0.578466021 container start 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public)
Nov 25 10:25:46 compute-0 podman[205707]: openstack_network_exporter
Nov 25 10:25:46 compute-0 systemd[1]: Started openstack_network_exporter container.
Nov 25 10:25:46 compute-0 sudo[205634]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:46 compute-0 podman[205732]: 2025-11-25 10:25:46.848515639 +0000 UTC m=+0.134356819 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Nov 25 10:25:47 compute-0 sudo[205903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhjavuzlnfpiqnmeyiyczoaebbshhovc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066346.9637635-833-34281660976762/AnsiballZ_find.py'
Nov 25 10:25:47 compute-0 sudo[205903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:47 compute-0 python3.9[205905]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:25:47 compute-0 sudo[205903]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:48 compute-0 sudo[206066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcefufkwhnjldnaorescegrzdlxovilt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066347.7625988-843-203673471782582/AnsiballZ_podman_container_info.py'
Nov 25 10:25:48 compute-0 sudo[206066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:48 compute-0 podman[206029]: 2025-11-25 10:25:48.266483702 +0000 UTC m=+0.134184074 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:25:48 compute-0 python3.9[206075]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 25 10:25:48 compute-0 sudo[206066]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:49 compute-0 sudo[206244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuejtiiqzvgsfgwgbcinzvrspgwlxcmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066348.6422558-851-254928050215775/AnsiballZ_podman_container_exec.py'
Nov 25 10:25:49 compute-0 sudo[206244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:49 compute-0 python3.9[206246]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:25:49 compute-0 systemd[1]: Started libpod-conmon-5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.scope.
Nov 25 10:25:49 compute-0 podman[206247]: 2025-11-25 10:25:49.813685665 +0000 UTC m=+0.490589247 container exec 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:25:50 compute-0 podman[206247]: 2025-11-25 10:25:50.03258837 +0000 UTC m=+0.709491952 container exec_died 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:25:50 compute-0 systemd[1]: libpod-conmon-5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.scope: Deactivated successfully.
Nov 25 10:25:50 compute-0 sudo[206244]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:50 compute-0 sudo[206423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjkirrxdsuozqmplbomsqqaayueixqwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066350.4547133-859-137963471235096/AnsiballZ_podman_container_exec.py'
Nov 25 10:25:50 compute-0 sudo[206423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:50 compute-0 python3.9[206425]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:25:51 compute-0 systemd[1]: Started libpod-conmon-5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.scope.
Nov 25 10:25:51 compute-0 podman[206426]: 2025-11-25 10:25:51.319765868 +0000 UTC m=+0.306846341 container exec 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 25 10:25:51 compute-0 podman[206426]: 2025-11-25 10:25:51.58349825 +0000 UTC m=+0.570578733 container exec_died 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 10:25:52 compute-0 systemd[1]: libpod-conmon-5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.scope: Deactivated successfully.
Nov 25 10:25:52 compute-0 sudo[206423]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:52 compute-0 sudo[206605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzhvstmcmsbtrloaowcdtckhheqgzsmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066352.2333248-867-155349137464335/AnsiballZ_file.py'
Nov 25 10:25:52 compute-0 sudo[206605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:52 compute-0 python3.9[206607]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:25:52 compute-0 sudo[206605]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:52 compute-0 podman[206629]: 2025-11-25 10:25:52.947971646 +0000 UTC m=+0.060376074 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 10:25:53 compute-0 sudo[206778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzpulquceanvigfxrwzcnkpbsrdngzhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066353.006884-876-120263102772748/AnsiballZ_podman_container_info.py'
Nov 25 10:25:53 compute-0 sudo[206778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:53 compute-0 python3.9[206780]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 25 10:25:53 compute-0 sudo[206778]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:54 compute-0 sudo[206943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bavksvwoxkighzkztjknxokkwkjsglvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066353.743849-884-176701435872243/AnsiballZ_podman_container_exec.py'
Nov 25 10:25:54 compute-0 sudo[206943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:54 compute-0 python3.9[206945]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:25:54 compute-0 systemd[1]: Started libpod-conmon-1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.scope.
Nov 25 10:25:54 compute-0 podman[206946]: 2025-11-25 10:25:54.437607567 +0000 UTC m=+0.162971330 container exec 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 10:25:54 compute-0 podman[206965]: 2025-11-25 10:25:54.508801861 +0000 UTC m=+0.054466435 container exec_died 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 10:25:54 compute-0 podman[206946]: 2025-11-25 10:25:54.603829569 +0000 UTC m=+0.329193322 container exec_died 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:25:54 compute-0 systemd[1]: libpod-conmon-1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.scope: Deactivated successfully.
Nov 25 10:25:54 compute-0 sudo[206943]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:55 compute-0 sudo[207127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoizndnawifbbiwngerfrsuptoltodou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066355.293192-892-42010783217155/AnsiballZ_podman_container_exec.py'
Nov 25 10:25:55 compute-0 sudo[207127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:55 compute-0 python3.9[207129]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:25:55 compute-0 systemd[1]: Started libpod-conmon-1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.scope.
Nov 25 10:25:55 compute-0 podman[207130]: 2025-11-25 10:25:55.969574263 +0000 UTC m=+0.162482356 container exec 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 10:25:56 compute-0 podman[207150]: 2025-11-25 10:25:56.038899004 +0000 UTC m=+0.052083117 container exec_died 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 10:25:56 compute-0 podman[207130]: 2025-11-25 10:25:56.070072349 +0000 UTC m=+0.262980422 container exec_died 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:25:56 compute-0 systemd[1]: libpod-conmon-1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.scope: Deactivated successfully.
Nov 25 10:25:56 compute-0 sudo[207127]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:56 compute-0 nova_compute[189381]: 2025-11-25 10:25:56.420 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:56 compute-0 nova_compute[189381]: 2025-11-25 10:25:56.422 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:56 compute-0 nova_compute[189381]: 2025-11-25 10:25:56.441 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:56 compute-0 nova_compute[189381]: 2025-11-25 10:25:56.442 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:56 compute-0 nova_compute[189381]: 2025-11-25 10:25:56.442 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:25:56 compute-0 sudo[207311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfibgephtohhvjnfwisvugyztrrlboyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066356.401903-900-253671987312584/AnsiballZ_file.py'
Nov 25 10:25:56 compute-0 sudo[207311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:56 compute-0 python3.9[207313]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:25:56 compute-0 sudo[207311]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:57 compute-0 nova_compute[189381]: 2025-11-25 10:25:57.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:57 compute-0 nova_compute[189381]: 2025-11-25 10:25:57.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:57 compute-0 nova_compute[189381]: 2025-11-25 10:25:57.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:57 compute-0 sudo[207463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gixckquwfjdjglfebwqiloypnkrcewro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066357.1673915-909-164934236565292/AnsiballZ_podman_container_info.py'
Nov 25 10:25:57 compute-0 sudo[207463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:57 compute-0 python3.9[207465]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 25 10:25:57 compute-0 sudo[207463]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.043 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.043 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.044 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.074 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.075 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.075 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.075 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.241 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.242 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5863MB free_disk=72.2625732421875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:25:58 compute-0 sudo[207628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmjdycfunhfsxzjyjrnpkvqztyfpwngi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066357.9594865-917-55539208925128/AnsiballZ_podman_container_exec.py'
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.243 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.243 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:25:58 compute-0 sudo[207628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.330 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.331 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.360 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.376 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.378 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:25:58 compute-0 nova_compute[189381]: 2025-11-25 10:25:58.378 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:25:58 compute-0 python3.9[207630]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:25:58 compute-0 systemd[1]: Started libpod-conmon-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope.
Nov 25 10:25:58 compute-0 podman[207631]: 2025-11-25 10:25:58.681472767 +0000 UTC m=+0.112866711 container exec b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:25:58 compute-0 podman[207652]: 2025-11-25 10:25:58.755968016 +0000 UTC m=+0.058646385 container exec_died b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:25:58 compute-0 podman[207648]: 2025-11-25 10:25:58.758361125 +0000 UTC m=+0.067219561 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:25:58 compute-0 podman[207631]: 2025-11-25 10:25:58.894003989 +0000 UTC m=+0.325397943 container exec_died b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 10:25:58 compute-0 systemd[1]: libpod-conmon-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope: Deactivated successfully.
Nov 25 10:25:59 compute-0 sudo[207628]: pam_unix(sudo:session): session closed for user root
Nov 25 10:25:59 compute-0 sudo[207835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siicfldrhzldtnrwchptlpolyrmvvirv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066359.3804114-925-179073249232369/AnsiballZ_podman_container_exec.py'
Nov 25 10:25:59 compute-0 sudo[207835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:25:59 compute-0 python3.9[207837]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:00 compute-0 systemd[1]: Started libpod-conmon-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope.
Nov 25 10:26:00 compute-0 podman[207838]: 2025-11-25 10:26:00.208939503 +0000 UTC m=+0.302209468 container exec b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Nov 25 10:26:00 compute-0 podman[207857]: 2025-11-25 10:26:00.461816324 +0000 UTC m=+0.237250113 container exec_died b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 25 10:26:00 compute-0 podman[207838]: 2025-11-25 10:26:00.559526678 +0000 UTC m=+0.652796633 container exec_died b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd)
Nov 25 10:26:00 compute-0 systemd[1]: libpod-conmon-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope: Deactivated successfully.
Nov 25 10:26:00 compute-0 sudo[207835]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:01 compute-0 sudo[208019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnwypjmqqfqsdjaoyjqyqvdhedhxjhtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066360.9154708-933-44417665236060/AnsiballZ_file.py'
Nov 25 10:26:01 compute-0 sudo[208019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:01 compute-0 python3.9[208021]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:01 compute-0 sudo[208019]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:01 compute-0 sudo[208171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aofkgjhkkscioxfjljrxzaxzeshtkddc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066361.6262777-942-260448892808967/AnsiballZ_podman_container_info.py'
Nov 25 10:26:01 compute-0 sudo[208171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:02 compute-0 python3.9[208173]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 25 10:26:02 compute-0 sudo[208171]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:02 compute-0 sudo[208336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rautirddidaavsmiipvadqkamczhaidn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066362.327814-950-235302788806475/AnsiballZ_podman_container_exec.py'
Nov 25 10:26:02 compute-0 sudo[208336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:02 compute-0 python3.9[208338]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:03 compute-0 systemd[1]: Started libpod-conmon-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope.
Nov 25 10:26:03 compute-0 podman[208339]: 2025-11-25 10:26:03.207134747 +0000 UTC m=+0.280333350 container exec 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:26:03 compute-0 podman[208364]: 2025-11-25 10:26:03.276725565 +0000 UTC m=+0.056640227 container exec_died 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:26:03 compute-0 systemd[1]: libpod-conmon-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope: Deactivated successfully.
Nov 25 10:26:03 compute-0 podman[208339]: 2025-11-25 10:26:03.340720001 +0000 UTC m=+0.413918614 container exec_died 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:26:03 compute-0 podman[208355]: 2025-11-25 10:26:03.536011649 +0000 UTC m=+0.326090853 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 25 10:26:03 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-6b823eef448fbc51.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:26:03 compute-0 systemd[1]: 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d-6b823eef448fbc51.service: Failed with result 'exit-code'.
Nov 25 10:26:03 compute-0 sudo[208336]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:04 compute-0 sudo[208538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxkwrsepnmrxnytdcflfpofvtqbcipmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066363.771351-958-170144220630350/AnsiballZ_podman_container_exec.py'
Nov 25 10:26:04 compute-0 sudo[208538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:04 compute-0 python3.9[208540]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:04 compute-0 systemd[1]: Started libpod-conmon-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope.
Nov 25 10:26:04 compute-0 podman[208541]: 2025-11-25 10:26:04.382582966 +0000 UTC m=+0.108060264 container exec 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 25 10:26:04 compute-0 podman[208561]: 2025-11-25 10:26:04.578922103 +0000 UTC m=+0.183649954 container exec_died 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 25 10:26:04 compute-0 podman[208541]: 2025-11-25 10:26:04.925863213 +0000 UTC m=+0.651340481 container exec_died 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:26:04 compute-0 systemd[1]: libpod-conmon-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope: Deactivated successfully.
Nov 25 10:26:05 compute-0 sudo[208538]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:05 compute-0 sudo[208723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeoweswkbgiuenjlyvhjamrlgqlcbgyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066365.3284488-966-25156841863581/AnsiballZ_file.py'
Nov 25 10:26:05 compute-0 sudo[208723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:06 compute-0 python3.9[208725]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:06 compute-0 sudo[208723]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:06 compute-0 sudo[208875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igvjkgaquibxxexwkmbkfxdruptmgvsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066366.309969-975-67372619105841/AnsiballZ_podman_container_info.py'
Nov 25 10:26:06 compute-0 sudo[208875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:06 compute-0 python3.9[208877]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 25 10:26:06 compute-0 sudo[208875]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:07 compute-0 sudo[209040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drotewsovcwdrrsrsfniqzxarzjqcvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066367.0661256-983-244294312190273/AnsiballZ_podman_container_exec.py'
Nov 25 10:26:07 compute-0 sudo[209040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:07 compute-0 python3.9[209042]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:08 compute-0 systemd[1]: Started libpod-conmon-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope.
Nov 25 10:26:08 compute-0 podman[209043]: 2025-11-25 10:26:08.144836378 +0000 UTC m=+0.353534712 container exec 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:26:08 compute-0 podman[209062]: 2025-11-25 10:26:08.390764939 +0000 UTC m=+0.233376602 container exec_died 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:26:08 compute-0 podman[209043]: 2025-11-25 10:26:08.789770905 +0000 UTC m=+0.998469219 container exec_died 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 10:26:08 compute-0 systemd[1]: libpod-conmon-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope: Deactivated successfully.
Nov 25 10:26:08 compute-0 sudo[209040]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:09 compute-0 sudo[209224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiqgzhzrpvbdgxsawvmauivwvzylhkky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066369.1315596-991-5504624557300/AnsiballZ_podman_container_exec.py'
Nov 25 10:26:09 compute-0 sudo[209224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:09 compute-0 python3.9[209226]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:10 compute-0 systemd[1]: Started libpod-conmon-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope.
Nov 25 10:26:10 compute-0 podman[209227]: 2025-11-25 10:26:10.141303951 +0000 UTC m=+0.494544041 container exec 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:26:10 compute-0 podman[209246]: 2025-11-25 10:26:10.284757069 +0000 UTC m=+0.131380553 container exec_died 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:26:10 compute-0 podman[209227]: 2025-11-25 10:26:10.335892197 +0000 UTC m=+0.689132267 container exec_died 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:26:10 compute-0 systemd[1]: libpod-conmon-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope: Deactivated successfully.
Nov 25 10:26:10 compute-0 sudo[209224]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:10 compute-0 sudo[209408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssdrcziwwjreaiabiigxgrgjxiedzxyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066370.6709912-999-62938777918239/AnsiballZ_file.py'
Nov 25 10:26:10 compute-0 sudo[209408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:11 compute-0 python3.9[209410]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:11 compute-0 sudo[209408]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:11 compute-0 sudo[209560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xivdtpbwzfaydrbecgpwgjbbgskvkvis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066371.4152577-1008-185806397606817/AnsiballZ_podman_container_info.py'
Nov 25 10:26:11 compute-0 sudo[209560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:11 compute-0 python3.9[209562]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 25 10:26:11 compute-0 podman[209563]: 2025-11-25 10:26:11.973397453 +0000 UTC m=+0.089904622 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:26:12 compute-0 sudo[209560]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:12 compute-0 sudo[209745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeavuphepfaalubnnvqjqtihpmehoncd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066372.3239276-1016-211178687311785/AnsiballZ_podman_container_exec.py'
Nov 25 10:26:12 compute-0 sudo[209745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:12 compute-0 python3.9[209747]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:12 compute-0 systemd[1]: Started libpod-conmon-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope.
Nov 25 10:26:12 compute-0 podman[209748]: 2025-11-25 10:26:12.876814152 +0000 UTC m=+0.076702413 container exec ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:26:12 compute-0 podman[209748]: 2025-11-25 10:26:12.906191246 +0000 UTC m=+0.106079507 container exec_died ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:26:12 compute-0 systemd[1]: libpod-conmon-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope: Deactivated successfully.
Nov 25 10:26:12 compute-0 sudo[209745]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:13 compute-0 sudo[209928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brnuimpesecmbvotkcuuqesrpoutmfub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066373.0853212-1024-220882988935700/AnsiballZ_podman_container_exec.py'
Nov 25 10:26:13 compute-0 sudo[209928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:13 compute-0 python3.9[209930]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:13 compute-0 systemd[1]: Started libpod-conmon-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope.
Nov 25 10:26:13 compute-0 podman[209931]: 2025-11-25 10:26:13.647167711 +0000 UTC m=+0.072975616 container exec ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:26:13 compute-0 podman[209931]: 2025-11-25 10:26:13.682990849 +0000 UTC m=+0.108798724 container exec_died ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:26:13 compute-0 systemd[1]: libpod-conmon-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope: Deactivated successfully.
Nov 25 10:26:13 compute-0 sudo[209928]: pam_unix(sudo:session): session closed for user root
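The two podman_container_exec calls above read the uid and gid the exporter process runs as; the libpod-conmon-…scope units that start and deactivate around each call are the per-exec conmon supervisors. The ad-hoc equivalents on the host are simply:

    podman exec podman_exporter id -u    # numeric uid inside the container
    podman exec podman_exporter id -g    # numeric gid
    podman container inspect podman_exporter    # the JSON that podman_container_info gathers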
Nov 25 10:26:14 compute-0 sudo[210112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcazmkprtvqkfbmotuxfobovkqwdocct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066373.8783157-1032-98648495619096/AnsiballZ_file.py'
Nov 25 10:26:14 compute-0 sudo[210112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:14 compute-0 python3.9[210114]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:14 compute-0 sudo[210112]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:14 compute-0 sudo[210264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzzjeukcqiwjjolohgrseprkcdvbycwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066374.5493777-1041-183600423099446/AnsiballZ_podman_container_info.py'
Nov 25 10:26:14 compute-0 sudo[210264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:14 compute-0 python3.9[210266]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 25 10:26:15 compute-0 sudo[210264]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:15 compute-0 sudo[210429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhuwhsswddjyryzkaxawzsnarhtvrgso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066375.2305691-1049-95140132867761/AnsiballZ_podman_container_exec.py'
Nov 25 10:26:15 compute-0 sudo[210429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:15 compute-0 python3.9[210431]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:15 compute-0 systemd[1]: Started libpod-conmon-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope.
Nov 25 10:26:15 compute-0 podman[210432]: 2025-11-25 10:26:15.844901622 +0000 UTC m=+0.129742297 container exec 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41)
Nov 25 10:26:15 compute-0 podman[210451]: 2025-11-25 10:26:15.911443842 +0000 UTC m=+0.054564987 container exec_died 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Nov 25 10:26:15 compute-0 podman[210432]: 2025-11-25 10:26:15.930923291 +0000 UTC m=+0.215763966 container exec_died 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:26:15 compute-0 systemd[1]: libpod-conmon-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope: Deactivated successfully.
Nov 25 10:26:16 compute-0 sudo[210429]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:16 compute-0 sudo[210626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucewcjomftmhvhvztozuytxxwawidtzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066376.1844237-1057-96886984856347/AnsiballZ_podman_container_exec.py'
Nov 25 10:26:16 compute-0 sudo[210626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:16 compute-0 podman[210587]: 2025-11-25 10:26:16.526382108 +0000 UTC m=+0.064176953 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
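Per the config_data above, node_exporter listens on host port 9100 with TLS settings taken from the mounted --web.config.file, most default collectors disabled, and systemd-unit metrics restricted to the edpm_*/ovs*/openvswitch/virt*/rsyslog services. A quick smoke test from the host might look like this (assuming the web config enables HTTPS with the mounted cert; drop the scheme down to http if it does not):

    curl -sk https://localhost:9100/metrics | grep '^node_systemd_unit_state' | head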
Nov 25 10:26:16 compute-0 python3.9[210639]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:26:16 compute-0 systemd[1]: Started libpod-conmon-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope.
Nov 25 10:26:16 compute-0 podman[210640]: 2025-11-25 10:26:16.925475177 +0000 UTC m=+0.142311487 container exec 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible)
Nov 25 10:26:17 compute-0 podman[210640]: 2025-11-25 10:26:17.038918974 +0000 UTC m=+0.255755274 container exec_died 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Nov 25 10:26:17 compute-0 podman[210653]: 2025-11-25 10:26:17.225527472 +0000 UTC m=+0.344294036 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 25 10:26:17 compute-0 systemd[1]: libpod-conmon-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope: Deactivated successfully.
Nov 25 10:26:17 compute-0 sudo[210626]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:17 compute-0 sudo[210841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnxoxykkvxogscxwzzjcmhvefqwrohju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066377.4329782-1065-82299092062148/AnsiballZ_file.py'
Nov 25 10:26:17 compute-0 sudo[210841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:17 compute-0 python3.9[210843]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:17 compute-0 sudo[210841]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:18 compute-0 sudo[211006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpglxmammblwwurmnngbbdkbncrgmqvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066378.1994798-1074-162643711042207/AnsiballZ_file.py'
Nov 25 10:26:18 compute-0 sudo[211006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:18 compute-0 podman[210967]: 2025-11-25 10:26:18.54268563 +0000 UTC m=+0.095905035 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 10:26:18 compute-0 python3.9[211014]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:18 compute-0 sudo[211006]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:19 compute-0 sudo[211171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsfpubeyglfqtgenqzpcaagcxaqllweg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066378.8963366-1082-52254600348783/AnsiballZ_stat.py'
Nov 25 10:26:19 compute-0 sudo[211171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:19 compute-0 python3.9[211173]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:19 compute-0 sudo[211171]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:19 compute-0 sudo[211294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeckpkhmswrxajedrcffuzhjtgrwlvlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066378.8963366-1082-52254600348783/AnsiballZ_copy.py'
Nov 25 10:26:19 compute-0 sudo[211294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:20 compute-0 python3.9[211296]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066378.8963366-1082-52254600348783/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
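The rendered firewall.yaml.j2 lands as /var/lib/edpm-config/firewall/telemetry.yaml, one of the per-service rule files that edpm_nftables_from_files aggregates further down. Its exact schema is not visible in the log; as a purely illustrative entry (name and values hypothetical, modelled on the tripleo-style rule maps used elsewhere in EDPM), it could resemble:

    # hypothetical content of telemetry.yaml -- not taken from the host
    '100 node_exporter':
      proto: tcp
      dport: 9100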
Nov 25 10:26:20 compute-0 sudo[211294]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:20 compute-0 sudo[211446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icwstgmxxgaquvpippehkoautskhmpsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066380.4123623-1098-224952526759103/AnsiballZ_file.py'
Nov 25 10:26:20 compute-0 sudo[211446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:20 compute-0 python3.9[211448]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:20 compute-0 sudo[211446]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:21 compute-0 sudo[211598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykmlgoddicfleqvgpamebpiukgynxfxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066381.1270714-1106-125622261796869/AnsiballZ_stat.py'
Nov 25 10:26:21 compute-0 sudo[211598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:21 compute-0 python3.9[211600]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:21 compute-0 sudo[211598]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:21 compute-0 sudo[211676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxqmddtiozgrbckldccupemkugvdzbjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066381.1270714-1106-125622261796869/AnsiballZ_file.py'
Nov 25 10:26:21 compute-0 sudo[211676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:22 compute-0 python3.9[211678]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:22 compute-0 sudo[211676]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:22 compute-0 sudo[211828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xollfplvcehyevgyrpybvsuyyefznvtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066382.2555056-1118-183177034741169/AnsiballZ_stat.py'
Nov 25 10:26:22 compute-0 sudo[211828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:22 compute-0 python3.9[211830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:22 compute-0 sudo[211828]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:23 compute-0 sudo[211917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weklcmpprvddhqvncbfasarcaoinabbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066382.2555056-1118-183177034741169/AnsiballZ_file.py'
Nov 25 10:26:23 compute-0 sudo[211917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:23 compute-0 podman[211880]: 2025-11-25 10:26:23.047265986 +0000 UTC m=+0.056095242 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 25 10:26:23 compute-0 python3.9[211925]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yuihn8up recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:23 compute-0 sudo[211917]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:23 compute-0 sudo[212076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcdvciaheeznahynybppeowagfanqnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066383.4029765-1130-173510113842578/AnsiballZ_stat.py'
Nov 25 10:26:23 compute-0 sudo[212076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:23 compute-0 python3.9[212078]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:23 compute-0 sudo[212076]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:24 compute-0 sudo[212154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxlxvjcveyodmzfzwpxulplyklfbzyuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066383.4029765-1130-173510113842578/AnsiballZ_file.py'
Nov 25 10:26:24 compute-0 sudo[212154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:24 compute-0 python3.9[212156]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:24 compute-0 sudo[212154]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:24 compute-0 sudo[212306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkqeiewmppomrmrdgsygflfgghosptgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066384.6013558-1143-115237731401631/AnsiballZ_command.py'
Nov 25 10:26:24 compute-0 sudo[212306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:25 compute-0 python3.9[212308]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:26:25 compute-0 sudo[212306]: pam_unix(sudo:session): session closed for user root
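`nft -j list ruleset` dumps the live ruleset as JSON with a single top-level "nftables" array, which is much easier for the Ansible side to compare against desired state than the text form. For example, listing just the chain names:

    nft -j list ruleset | jq '.nftables[] | select(.chain) | .chain.name'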
Nov 25 10:26:25 compute-0 sudo[212459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ungstogctoxlubqinsdgmxsdoszftbdv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066385.2329395-1151-223636748130233/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 10:26:25 compute-0 sudo[212459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:25 compute-0 python3[212461]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 10:26:25 compute-0 sudo[212459]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:26 compute-0 sudo[212611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdrbelqbppywkriwvibkmvgqapgysoep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066386.2031364-1159-149667694048467/AnsiballZ_stat.py'
Nov 25 10:26:26 compute-0 sudo[212611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:26 compute-0 python3.9[212613]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:26 compute-0 sudo[212611]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:26 compute-0 sudo[212689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eruvqgadhxkojfntifoxyzxexshpxfti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066386.2031364-1159-149667694048467/AnsiballZ_file.py'
Nov 25 10:26:26 compute-0 sudo[212689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:27 compute-0 python3.9[212691]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:27 compute-0 sudo[212689]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:27 compute-0 sudo[212841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfinloomwpshmeyxgorqdjulbmarfjbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066387.3281302-1171-29405202623710/AnsiballZ_stat.py'
Nov 25 10:26:27 compute-0 sudo[212841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:27 compute-0 python3.9[212843]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:27 compute-0 sudo[212841]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:28 compute-0 sudo[212919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erxpwmnwxsbbzsupjgxcnvpeaoqedmnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066387.3281302-1171-29405202623710/AnsiballZ_file.py'
Nov 25 10:26:28 compute-0 sudo[212919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:28 compute-0 python3.9[212921]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:28 compute-0 sudo[212919]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:28 compute-0 sudo[213071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjahvfbmpjonhuloaosgzrbmhbvxrbxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066388.4840667-1183-187364200341190/AnsiballZ_stat.py'
Nov 25 10:26:28 compute-0 sudo[213071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:28 compute-0 podman[213073]: 2025-11-25 10:26:28.84779979 +0000 UTC m=+0.054166636 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:26:29 compute-0 python3.9[213074]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:29 compute-0 sudo[213071]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:29 compute-0 sudo[213169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeobcmjdiqpvgaplutjfvebzeyizfblu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066388.4840667-1183-187364200341190/AnsiballZ_file.py'
Nov 25 10:26:29 compute-0 sudo[213169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:29 compute-0 python3.9[213171]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:29 compute-0 sudo[213169]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:29 compute-0 sudo[213321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jasroetsxewxqkiovxymyziwamyrqypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066389.6967752-1195-211260393947894/AnsiballZ_stat.py'
Nov 25 10:26:29 compute-0 sudo[213321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:30 compute-0 python3.9[213323]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:30 compute-0 sudo[213321]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:30 compute-0 sudo[213399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uryvfywhojumfujheccgczchcitqhbvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066389.6967752-1195-211260393947894/AnsiballZ_file.py'
Nov 25 10:26:30 compute-0 sudo[213399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:30 compute-0 python3.9[213401]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:30 compute-0 sudo[213399]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:31 compute-0 sudo[213551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fddxwmfhxnbpzzcuwcojhnndjrfaowix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066390.8116577-1207-211983278234171/AnsiballZ_stat.py'
Nov 25 10:26:31 compute-0 sudo[213551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:31 compute-0 python3.9[213553]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:31 compute-0 sudo[213551]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:31 compute-0 sudo[213676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdypogkirmffcilcpnmysgcukksxstbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066390.8116577-1207-211983278234171/AnsiballZ_copy.py'
Nov 25 10:26:31 compute-0 sudo[213676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:31 compute-0 python3.9[213678]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066390.8116577-1207-211983278234171/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:31 compute-0 sudo[213676]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:32 compute-0 sudo[213828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixnpmtrfgrzyjgbcpsuvrjdvzuipstzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066392.1622946-1222-91200186854062/AnsiballZ_file.py'
Nov 25 10:26:32 compute-0 sudo[213828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:32 compute-0 python3.9[213830]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:32 compute-0 sudo[213828]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:33 compute-0 sudo[213980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqzzwchelvazqiovroakkdfxwqzlsmcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066392.781524-1230-84169716537882/AnsiballZ_command.py'
Nov 25 10:26:33 compute-0 sudo[213980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:33 compute-0 python3.9[213982]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:26:33 compute-0 sudo[213980]: pam_unix(sudo:session): session closed for user root
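The command above concatenates the generated fragments in load order and asks nft to parse them without committing anything, so a templating error fails the play before the live ruleset is touched:

    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft \
      | nft -c -f -    # -c: check only, read from stdin, no changes applied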
Nov 25 10:26:33 compute-0 podman[214098]: 2025-11-25 10:26:33.946135825 +0000 UTC m=+0.059808158 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:26:33 compute-0 sudo[214155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqxjdizdbmukbzxjpwobxxijcomvafeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066393.4869597-1238-70695846410035/AnsiballZ_blockinfile.py'
Nov 25 10:26:33 compute-0 sudo[214155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:34 compute-0 python3.9[214157]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:34 compute-0 sudo[214155]: pam_unix(sudo:session): session closed for user root
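The blockinfile task above maintains a marker-delimited block of include directives in /etc/sysconfig/nftables.conf, validated with nft -c -f %s before the file is replaced. Assuming the default marker rendering, the managed block would read roughly as follows (a sketch reconstructed from the logged parameters, not a dump of the actual file):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK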
Nov 25 10:26:34 compute-0 sudo[214307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqqddvypuxcjfxqdkejrhplnirgqzemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066394.620962-1247-30845696128207/AnsiballZ_command.py'
Nov 25 10:26:34 compute-0 sudo[214307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:35 compute-0 python3.9[214309]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:26:35 compute-0 sudo[214307]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:35 compute-0 sudo[214460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyltwfacdfatyoqwzzekwhgzskinnxaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066395.2448976-1255-253065458793689/AnsiballZ_stat.py'
Nov 25 10:26:35 compute-0 sudo[214460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:35 compute-0 python3.9[214462]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:26:35 compute-0 sudo[214460]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:26:36.014 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:26:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:26:36.015 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:26:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:26:36.015 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:26:36 compute-0 sudo[214614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icrwajmvpcpcaabjywktapbwbozrgion ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066395.868238-1263-60410343175981/AnsiballZ_command.py'
Nov 25 10:26:36 compute-0 sudo[214614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:36 compute-0 python3.9[214616]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:26:36 compute-0 sudo[214614]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:36 compute-0 sudo[214769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wanacnpdxjcsxfxobprmrjdavgmjpheh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066396.5515141-1271-57832035215736/AnsiballZ_file.py'
Nov 25 10:26:36 compute-0 sudo[214769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:36 compute-0 podman[203557]: time="2025-11-25T10:26:36Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:26:36 compute-0 podman[203557]: @ - - [25/Nov/2025:10:26:36 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22541 "" "Go-http-client/1.1"
Nov 25 10:26:36 compute-0 podman[203557]: @ - - [25/Nov/2025:10:26:36 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3410 "" "Go-http-client/1.1"
Nov 25 10:26:37 compute-0 python3.9[214771]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:37 compute-0 sudo[214769]: pam_unix(sudo:session): session closed for user root
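Taken together, the tasks between 10:26:32 and 10:26:37 implement a change-flag pattern: an edpm-rules.nft.changed marker is touched when the rules are regenerated, its presence is checked with stat, the flushes/rules/update-jumps files are applied, and the marker is removed. A condensed, hypothetical shell equivalent (the real logic lives in the edpm_ansible role):

    nft -f /etc/nftables/edpm-chains.nft              # ensure tables/chains exist first
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        # apply flushes, regenerated rules, and updated jumps in one nft transaction
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed    # clear the flag once applied
    fi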
Nov 25 10:26:37 compute-0 sshd-session[189678]: Connection closed by 192.168.122.30 port 51786
Nov 25 10:26:37 compute-0 sshd-session[189675]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:26:37 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Nov 25 10:26:37 compute-0 systemd[1]: session-26.scope: Consumed 1min 34.593s CPU time.
Nov 25 10:26:37 compute-0 systemd-logind[822]: Session 26 logged out. Waiting for processes to exit.
Nov 25 10:26:37 compute-0 systemd-logind[822]: Removed session 26.
Nov 25 10:26:38 compute-0 openstack_network_exporter[205722]: ERROR   10:26:38 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:26:38 compute-0 openstack_network_exporter[205722]: ERROR   10:26:38 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:26:38 compute-0 openstack_network_exporter[205722]: ERROR   10:26:38 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:26:38 compute-0 openstack_network_exporter[205722]: ERROR   10:26:38 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:26:38 compute-0 openstack_network_exporter[205722]: ERROR   10:26:38 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
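These recurring exporter errors are most likely benign on a compute node: ovn-northd runs on the control plane, so no ovn-northd control socket exists here, and the dpif-netdev/* appctl calls only apply to the userspace (DPDK) datapath, which this host does not appear to use. One way to see which control sockets are actually present (host paths taken from the exporter's volume mounts logged at 10:26:47; adjust per deployment):

    ls /var/run/openvswitch/*.ctl 2>/dev/null      # ovs-vswitchd / ovsdb-server sockets
    ls /var/lib/openvswitch/ovn/*.ctl 2>/dev/null  # ovn-controller socket; no northd expected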
Nov 25 10:26:42 compute-0 podman[214805]: 2025-11-25 10:26:42.999717371 +0000 UTC m=+0.118818523 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:26:44 compute-0 sshd-session[214824]: Accepted publickey for zuul from 192.168.122.30 port 46596 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:26:44 compute-0 systemd-logind[822]: New session 27 of user zuul.
Nov 25 10:26:44 compute-0 systemd[1]: Started Session 27 of User zuul.
Nov 25 10:26:44 compute-0 sshd-session[214824]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:26:45 compute-0 sudo[214977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtabotnrhydqpjsnojpfdyohcvetmjiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066404.5892794-24-57864136249441/AnsiballZ_systemd_service.py'
Nov 25 10:26:45 compute-0 sudo[214977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:45 compute-0 python3.9[214979]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:26:45 compute-0 systemd[1]: Reloading.
Nov 25 10:26:45 compute-0 systemd-rc-local-generator[215006]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:26:45 compute-0 systemd-sysv-generator[215010]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:26:46 compute-0 sudo[214977]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:46 compute-0 podman[215138]: 2025-11-25 10:26:46.736356857 +0000 UTC m=+0.102973858 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:26:46 compute-0 python3.9[215176]: ansible-ansible.builtin.service_facts Invoked
Nov 25 10:26:46 compute-0 network[215205]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 25 10:26:46 compute-0 network[215206]: 'network-scripts' will be removed from distribution in near future.
Nov 25 10:26:46 compute-0 network[215207]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 25 10:26:47 compute-0 podman[215214]: 2025-11-25 10:26:47.948662965 +0000 UTC m=+0.083388826 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:26:48 compute-0 podman[215278]: 2025-11-25 10:26:48.648585601 +0000 UTC m=+0.069856017 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:26:50 compute-0 sudo[215525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzppnycmqhkngymnsmcdzmcgktdkkyra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066410.146413-47-226557895343633/AnsiballZ_systemd_service.py'
Nov 25 10:26:50 compute-0 sudo[215525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:50 compute-0 python3.9[215527]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:26:50 compute-0 sudo[215525]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:51 compute-0 sudo[215679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkxkqmywnpukwxaetantuneypejpeapo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066411.001317-57-148654883419480/AnsiballZ_file.py'
Nov 25 10:26:51 compute-0 sudo[215679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:51 compute-0 python3.9[215681]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:51 compute-0 sudo[215679]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:52 compute-0 sudo[215831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdmskjevphwskqybqeohpzdbjwjwywwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066411.8735948-65-43077076735218/AnsiballZ_file.py'
Nov 25 10:26:52 compute-0 sudo[215831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:52 compute-0 python3.9[215833]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:26:52 compute-0 sudo[215831]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:52 compute-0 sudo[215983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehxmrzjnmmwllqzrtudcheuhrfkuicy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066412.569648-74-115023686834373/AnsiballZ_command.py'
Nov 25 10:26:52 compute-0 sudo[215983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:53 compute-0 python3.9[215985]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:26:53 compute-0 sudo[215983]: pam_unix(sudo:session): session closed for user root
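The _raw_params script above disables certmonger only when it is currently active; reformatted for readability (behavior identical to the logged command):

    if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service
        # mask only if nothing already occupies the unit path in /etc/systemd/system,
        # since masking creates a symlink there and must not clobber an existing file
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi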
Nov 25 10:26:53 compute-0 podman[215988]: 2025-11-25 10:26:53.312275825 +0000 UTC m=+0.057798810 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 10:26:54 compute-0 python3.9[216158]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:26:54 compute-0 sudo[216308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojdshmikmfnyjabxzifqxjlhgiemudek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066414.3684566-92-53154766690699/AnsiballZ_systemd_service.py'
Nov 25 10:26:54 compute-0 sudo[216308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:54 compute-0 python3.9[216310]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:26:54 compute-0 systemd[1]: Reloading.
Nov 25 10:26:55 compute-0 systemd-sysv-generator[216341]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:26:55 compute-0 systemd-rc-local-generator[216335]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:26:55 compute-0 sudo[216308]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:55 compute-0 sudo[216495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rykfwfzvivgyawhbadxdiwdwypavlhvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066415.4255116-100-111685909556848/AnsiballZ_command.py'
Nov 25 10:26:55 compute-0 sudo[216495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:55 compute-0 python3.9[216497]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:26:55 compute-0 sudo[216495]: pam_unix(sudo:session): session closed for user root
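The sequence from 10:26:50 onward retires the legacy tripleo_ceilometer_agent_ipmi unit. A condensed sketch of the same steps as plain systemctl/rm calls (the playbook performs them via the systemd_service, file, and command modules):

    systemctl disable --now tripleo_ceilometer_agent_ipmi.service    # stop and disable
    rm -f /usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service \
          /etc/systemd/system/tripleo_ceilometer_agent_ipmi.service  # remove unit files
    systemctl daemon-reload                                          # drop the stale unit
    systemctl reset-failed tripleo_ceilometer_agent_ipmi.service     # clear any failed state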
Nov 25 10:26:56 compute-0 sudo[216648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzmgwxxgetgradnqbikufvaghcbmbdij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066416.1224346-109-234047900700937/AnsiballZ_file.py'
Nov 25 10:26:56 compute-0 sudo[216648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:56 compute-0 nova_compute[189381]: 2025-11-25 10:26:56.372 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:26:56 compute-0 python3.9[216650]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:26:56 compute-0 sudo[216648]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:57 compute-0 nova_compute[189381]: 2025-11-25 10:26:57.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:26:57 compute-0 nova_compute[189381]: 2025-11-25 10:26:57.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:26:57 compute-0 python3.9[216800]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:26:57 compute-0 python3.9[216952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.057 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.057 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.057 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.057 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.235 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.237 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5844MB free_disk=72.26188278198242GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.237 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.237 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.304 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.305 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.323 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.338 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.340 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:26:58 compute-0 nova_compute[189381]: 2025-11-25 10:26:58.340 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:26:58 compute-0 python3.9[217073]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066417.5116367-125-105179460158057/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:26:58 compute-0 podman[217098]: 2025-11-25 10:26:58.989971813 +0000 UTC m=+0.107949941 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:26:59 compute-0 sudo[217247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tucfpmvhzgbvfsgdyrwodvxwdqyqiikh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066418.9312685-143-280338906222617/AnsiballZ_getent.py'
Nov 25 10:26:59 compute-0 sudo[217247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:26:59 compute-0 nova_compute[189381]: 2025-11-25 10:26:59.340 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:26:59 compute-0 python3.9[217249]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 25 10:26:59 compute-0 sudo[217247]: pam_unix(sudo:session): session closed for user root
Nov 25 10:26:59 compute-0 podman[203557]: time="2025-11-25T10:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:26:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22541 "" "Go-http-client/1.1"
Nov 25 10:26:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3424 "" "Go-http-client/1.1"
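The GET lines above are the telemetry stack polling Podman's libpod REST API over its unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock per the podman_exporter config logged at 10:26:58). The same query can be issued by hand; the dummy http://d host is the usual curl idiom for unix sockets:

    curl -s --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true' | python3 -m json.tool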
Nov 25 10:27:00 compute-0 nova_compute[189381]: 2025-11-25 10:27:00.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:00 compute-0 nova_compute[189381]: 2025-11-25 10:27:00.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:27:00 compute-0 nova_compute[189381]: 2025-11-25 10:27:00.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:27:00 compute-0 nova_compute[189381]: 2025-11-25 10:27:00.035 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:27:00 compute-0 python3.9[217400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:01 compute-0 python3.9[217521]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764066420.238689-171-138893684950424/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:01 compute-0 openstack_network_exporter[205722]: ERROR   10:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:27:01 compute-0 openstack_network_exporter[205722]: ERROR   10:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:27:01 compute-0 openstack_network_exporter[205722]: ERROR   10:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:27:01 compute-0 openstack_network_exporter[205722]: ERROR   10:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:27:01 compute-0 openstack_network_exporter[205722]: ERROR   10:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:27:01 compute-0 anacron[30897]: Job `cron.daily' started
Nov 25 10:27:01 compute-0 anacron[30897]: Job `cron.daily' terminated
Nov 25 10:27:01 compute-0 python3.9[217671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:02 compute-0 python3.9[217794]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764066421.4041097-171-63582788124868/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:03 compute-0 python3.9[217944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.324 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.325 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
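The registration, discovery, and skip lines above trace one polling cycle: the `local_instances` discovery runs once, its empty result is stored in the discovery cache (`{'local_instances': []}`), every later pollster naming the same discovery method reuses that cached list, and the pollster history dict grows one key per meter. A minimal sketch of this caching pattern, using simplified stand-in types rather than the real ceilometer.polling.manager interfaces:

```python
from dataclasses import dataclass

@dataclass
class Pollster:
    # Illustrative stand-in; not the actual ceilometer pollster class.
    name: str
    discovery_method: str = "local_instances"

def run_cycle(pollsters, discover):
    discovery_cache = {}  # e.g. {'local_instances': []} once discovery ran
    history = {}          # grows one key per meter, as in the log lines

    for p in pollsters:
        if p.discovery_method not in discovery_cache:
            # Discovery executes once per cycle; later pollsters reuse it.
            discovery_cache[p.discovery_method] = discover()
        resources = discovery_cache[p.discovery_method]
        history.setdefault(p.name, [])
        if not resources:
            print(f"Skip pollster {p.name}, no resources found this cycle")
            continue
        # a real pollster would now collect samples from `resources`

run_cycle([Pollster("network.outgoing.bytes"), Pollster("memory.usage")],
          discover=lambda: [])  # no instances running on this compute host
```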
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2402f49e80>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:27:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:27:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
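The out-of-order timestamps in the block above (a 10:27:03.333 discovery line followed by a 10:27:03.332 registration) and the closing burst of "Finished processing pollster" messages are what concurrent execution on the ThreadPoolExecutor named in the registration lines looks like when flattened into a single journal. A rough sketch of that fan-out/fan-in shape (illustrative, not the manager's actual code):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def poll_one(meter):
    # Placeholder for the per-pollster work; here it just echoes the name.
    return meter

meters = ["network.outgoing.bytes", "memory.usage", "cpu", "disk.root.size"]

with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(poll_one, m) for m in meters]
    for fut in as_completed(futures):
        # Completion order is not submission order, as in the log burst above.
        print(f"Finished processing pollster [{fut.result()}].")
```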
Nov 25 10:27:03 compute-0 python3.9[218066]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764066422.7218056-171-74883672811032/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
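The `checksum=838b8b0a...` field in this copy task is a SHA-1 digest of the file content; the stat tasks that follow request the same algorithm explicitly (`checksum_algorithm=sha1`). The value can be reproduced outside Ansible:

```python
import hashlib

def ansible_style_checksum(path, chunk_size=65536):
    """SHA-1 over the file content, matching the checksum= fields in this log."""
    digest = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# e.g. ansible_style_checksum(
#     "/var/lib/openstack/config/telemetry-power-monitoring/custom.conf")
```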
Nov 25 10:27:04 compute-0 podman[218190]: 2025-11-25 10:27:04.289530687 +0000 UTC m=+0.062398735 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
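The `health_status=healthy` events here and below are emitted by podman's healthcheck timer running the configured `test` command (`/openstack/healthcheck compute`) inside the container; `health_failing_streak=0` means no consecutive failures. The same check can be triggered on demand, since `podman healthcheck run` exits 0 when the check passes:

```python
import subprocess

def container_is_healthy(name: str) -> bool:
    """Trigger the container's configured healthcheck once via podman."""
    result = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True, text=True)
    return result.returncode == 0

# e.g. container_is_healthy("ceilometer_agent_compute")
```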
Nov 25 10:27:04 compute-0 python3.9[218230]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:27:05 compute-0 python3.9[218390]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:27:05 compute-0 python3.9[218542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:06 compute-0 python3.9[218663]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066425.2202785-230-40614104827867/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
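`mode=420` in this and the neighbouring tasks is not an error: an unquoted `0644` in YAML is parsed as an octal integer literal, so the module receives and logs the decimal value 420 while applying the intended rw-r--r-- bits (compare the later ceilometer_prom_exporter.yaml task, which passes the quoted string and logs `mode=0644`). Quick check:

```python
# YAML parses unquoted 0644 as octal; Ansible then logs it in decimal.
assert oct(420) == "0o644"
print(oct(420))  # 0o644 == rw-r--r--
```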
Nov 25 10:27:06 compute-0 python3.9[218813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:07 compute-0 python3.9[218889]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:07 compute-0 python3.9[219039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:08 compute-0 python3.9[219160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066427.477095-230-146279789571043/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:08 compute-0 python3.9[219310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:09 compute-0 python3.9[219431]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066428.5529873-230-168726446249859/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:10 compute-0 python3.9[219581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:10 compute-0 python3.9[219702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066429.6541195-230-206604854670869/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:11 compute-0 python3.9[219852]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:11 compute-0 python3.9[219973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066430.7412598-230-72187656402884/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:12 compute-0 python3.9[220123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:12 compute-0 python3.9[220199]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:13 compute-0 sudo[220365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxrxmykluhpjdqozhjqrlxhyopivpgav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066432.9937937-325-145115511619468/AnsiballZ_file.py'
Nov 25 10:27:13 compute-0 sudo[220365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:13 compute-0 podman[220323]: 2025-11-25 10:27:13.273300589 +0000 UTC m=+0.047470861 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:27:13 compute-0 python3.9[220370]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:13 compute-0 sudo[220365]: pam_unix(sudo:session): session closed for user root
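The recurring sudo pattern (`/bin/sh -c 'echo BECOME-SUCCESS-<random> ; python3.9 .../AnsiballZ_*.py'`) is how Ansible's become machinery confirms privilege escalation: a one-off marker is echoed first, and everything up to it (sudo prompts, MOTD noise) is discarded before the wrapped module's output is read. A sketch of assembling such a wrapper, with a hypothetical helper name and not Ansible's actual implementation:

```python
import secrets
import string

def build_become_command(module_cmd: str) -> str:
    # One-off lowercase marker like the BECOME-SUCCESS-... strings above.
    marker = "".join(secrets.choice(string.ascii_lowercase) for _ in range(32))
    return f"/bin/sh -c 'echo BECOME-SUCCESS-{marker} ; {module_cmd}'"

print(build_become_command(
    "/usr/bin/python3.9 /home/zuul/.ansible/tmp/.../AnsiballZ_file.py"))
```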
Nov 25 10:27:14 compute-0 sudo[220520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-secrjjnbvsuwuurjegytnqrwwujgahfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066433.6389465-333-69296082841809/AnsiballZ_file.py'
Nov 25 10:27:14 compute-0 sudo[220520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:14 compute-0 python3.9[220522]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:14 compute-0 sudo[220520]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:14 compute-0 sudo[220672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybkpivlclarhwjimxtpbuxhzdvygvzag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066434.4244704-341-37122450239173/AnsiballZ_file.py'
Nov 25 10:27:14 compute-0 sudo[220672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:14 compute-0 python3.9[220674]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:27:14 compute-0 sudo[220672]: pam_unix(sudo:session): session closed for user root
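`setype=container_file_t` on the healthchecks directory is what lets the `:ro,z` bind mounts elsewhere in this log work: files without a container-accessible SELinux type would be denied inside the containers. A one-shot equivalent of that relabel, sketched with subprocess (assumes `chcon` is present; a persistent rule would instead go through `semanage fcontext`):

```python
import os
import subprocess

path = "/var/lib/openstack/healthchecks"
os.makedirs(path, exist_ok=True)
# Relabel the tree so containers may read it, as setype=container_file_t does.
subprocess.run(["chcon", "-R", "-t", "container_file_t", path], check=True)
```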
Nov 25 10:27:15 compute-0 sudo[220824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxsromzttgpmjrmnvrrqrwlujybrnbtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066435.0651262-349-51196143529547/AnsiballZ_stat.py'
Nov 25 10:27:15 compute-0 sudo[220824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:15 compute-0 python3.9[220826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:15 compute-0 sudo[220824]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:15 compute-0 sudo[220947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fepdfwsgyavftklrnynbgdouysaxdmyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066435.0651262-349-51196143529547/AnsiballZ_copy.py'
Nov 25 10:27:15 compute-0 sudo[220947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:16 compute-0 python3.9[220949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066435.0651262-349-51196143529547/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:27:16 compute-0 sudo[220947]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:16 compute-0 sudo[221023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pagnjlxgmgyvmxtrrgmtpgoqseriquwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066435.0651262-349-51196143529547/AnsiballZ_stat.py'
Nov 25 10:27:16 compute-0 sudo[221023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:16 compute-0 python3.9[221025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:16 compute-0 sudo[221023]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:16 compute-0 sudo[221161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikzyicjadulqxfdlwsextdibayjttcpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066435.0651262-349-51196143529547/AnsiballZ_copy.py'
Nov 25 10:27:16 compute-0 sudo[221161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:16 compute-0 podman[221120]: 2025-11-25 10:27:16.899401931 +0000 UTC m=+0.047938854 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:27:17 compute-0 python3.9[221172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066435.0651262-349-51196143529547/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:27:17 compute-0 sudo[221161]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:17 compute-0 sudo[221322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raohgjshdkijlfsgxqshuebkwgxvrldq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066437.2538643-349-155856825987348/AnsiballZ_stat.py'
Nov 25 10:27:17 compute-0 sudo[221322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:17 compute-0 python3.9[221324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:27:17 compute-0 sudo[221322]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:18 compute-0 sudo[221460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqjrjnupsubdgothmnvqcanyfkgangxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066437.2538643-349-155856825987348/AnsiballZ_copy.py'
Nov 25 10:27:18 compute-0 sudo[221460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:18 compute-0 podman[221419]: 2025-11-25 10:27:18.07522398 +0000 UTC m=+0.054916838 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Nov 25 10:27:18 compute-0 python3.9[221468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764066437.2538643-349-155856825987348/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 25 10:27:18 compute-0 sudo[221460]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:18 compute-0 sudo[221634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axzljjojckcqnackbxgswbmgiloazixr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066438.5479293-391-226146871610316/AnsiballZ_container_config_data.py'
Nov 25 10:27:18 compute-0 sudo[221634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:18 compute-0 podman[221591]: 2025-11-25 10:27:18.978297127 +0000 UTC m=+0.089929886 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 10:27:19 compute-0 python3.9[221640]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Nov 25 10:27:19 compute-0 sudo[221634]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:19 compute-0 sudo[221796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddqgaggxahjpvroohryaagmxlcqjgkvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066439.3684514-400-95056880435772/AnsiballZ_container_config_hash.py'
Nov 25 10:27:19 compute-0 sudo[221796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:19 compute-0 python3.9[221798]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:27:20 compute-0 sudo[221796]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:20 compute-0 sudo[221948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paqcpdgdvqrrblogkmekiiwzpsgsgvsy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066440.2917159-410-274795351433096/AnsiballZ_edpm_container_manage.py'
Nov 25 10:27:20 compute-0 sudo[221948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:21 compute-0 python3[221950]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:27:21 compute-0 podman[221986]: 2025-11-25 10:27:21.301665502 +0000 UTC m=+0.047880283 container create 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:27:21 compute-0 podman[221986]: 2025-11-25 10:27:21.278447657 +0000 UTC m=+0.024662458 image pull 02e0056780c6b31017996766cd13000137ba644dac3fc851da034db8cf4ceb2c quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 25 10:27:21 compute-0 python3[221950]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
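The PODMAN-CONTAINER-DEBUG line shows edpm_container_manage flattening the config_data dict into a podman create argument vector. The sketch below is a hedged reconstruction of that translation, not the module's actual code; the --label arguments visible in the logged command are omitted for brevity:

    def podman_create_argv(name: str, conf: dict) -> list[str]:
        """Approximate the config_data -> `podman create` mapping
        traced in the debug line above (illustrative only)."""
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in conf.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        if "healthcheck" in conf:
            argv += ["--healthcheck-command", conf["healthcheck"]["test"]]
        argv += ["--log-driver", "journald", "--log-level", "info"]
        if conf.get("net") == "host":
            argv += ["--network", "host"]
        if conf.get("privileged"):
            argv += ["--privileged=True"]
        if "security_opt" in conf:
            argv += ["--security-opt", conf["security_opt"]]
        if "user" in conf:
            argv += ["--user", conf["user"]]
        for vol in conf.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(conf["image"])
        if conf.get("command"):
            argv.append(conf["command"])  # e.g. kolla_start
        return argv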
Nov 25 10:27:21 compute-0 sudo[221948]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:21 compute-0 sudo[222172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijbjqhvgbrphvgbdtywccfmmkijfatev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066441.5925303-418-152467304001447/AnsiballZ_stat.py'
Nov 25 10:27:21 compute-0 sudo[222172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:22 compute-0 python3.9[222174]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:27:22 compute-0 sudo[222172]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:22 compute-0 sudo[222326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plqlhvvxpmlnqwsilljbnyedmpikttsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066442.3294375-427-99969711489961/AnsiballZ_file.py'
Nov 25 10:27:22 compute-0 sudo[222326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:22 compute-0 python3.9[222328]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:22 compute-0 sudo[222326]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:23 compute-0 sudo[222477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wamymhesjvxaydpydroztpopplfxaein ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066442.8680232-427-101941917502105/AnsiballZ_copy.py'
Nov 25 10:27:23 compute-0 sudo[222477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:23 compute-0 python3.9[222479]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764066442.8680232-427-101941917502105/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:23 compute-0 sudo[222477]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:23 compute-0 podman[222480]: 2025-11-25 10:27:23.937656995 +0000 UTC m=+0.058252366 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 25 10:27:24 compute-0 sudo[222574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfvvxexgsqmoyqkpilgfihbkcjtepodf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066442.8680232-427-101941917502105/AnsiballZ_systemd.py'
Nov 25 10:27:24 compute-0 sudo[222574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:24 compute-0 python3.9[222576]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:27:24 compute-0 systemd[1]: Reloading.
Nov 25 10:27:24 compute-0 systemd-sysv-generator[222608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:27:24 compute-0 systemd-rc-local-generator[222604]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:27:24 compute-0 sudo[222574]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:25 compute-0 sudo[222686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpfvqaqdybdarteuyehlgivosdzubisr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066442.8680232-427-101941917502105/AnsiballZ_systemd.py'
Nov 25 10:27:25 compute-0 sudo[222686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:25 compute-0 python3.9[222688]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:27:25 compute-0 systemd[1]: Reloading.
Nov 25 10:27:25 compute-0 systemd-rc-local-generator[222718]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:27:25 compute-0 systemd-sysv-generator[222721]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
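The preceding tasks follow the standard unit-deployment pattern: copy the edpm_ceilometer_agent_ipmi.service file into /etc/systemd/system, reload the manager (the two "Reloading." entries, each of which re-runs the sysv and rc-local generators), then enable and restart the unit. A minimal sketch of the equivalent steps, assuming root and a unit file already in place; the real flow uses the ansible-systemd module rather than direct systemctl calls:

    import subprocess

    def deploy_unit(unit: str) -> None:
        """Replay of the systemd steps logged above: daemon-reload,
        then enable + restart the freshly installed unit."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    # deploy_unit("edpm_ceilometer_agent_ipmi.service")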
Nov 25 10:27:25 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 25 10:27:25 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 25 10:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 25 10:27:25 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.
Nov 25 10:27:25 compute-0 podman[222728]: 2025-11-25 10:27:25.821098398 +0000 UTC m=+0.140181147 container init 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + sudo -E kolla_set_configs
Nov 25 10:27:25 compute-0 podman[222728]: 2025-11-25 10:27:25.84491612 +0000 UTC m=+0.163998879 container start 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 10:27:25 compute-0 podman[222728]: ceilometer_agent_ipmi
Nov 25 10:27:25 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 25 10:27:25 compute-0 sudo[222749]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 10:27:25 compute-0 sudo[222749]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:27:25 compute-0 sudo[222749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:27:25 compute-0 sudo[222686]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Validating config file
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Copying service configuration files
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: INFO:__main__:Writing out command to execute
Nov 25 10:27:25 compute-0 sudo[222749]: pam_unix(sudo:session): session closed for user root
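The INFO lines above trace kolla_set_configs running the COPY_ALWAYS strategy: load /var/lib/kolla/config_files/config.json, delete and re-copy each configured file, set its permissions, and finally write the service command to /run_command. A simplified sketch of that loop, assuming the standard kolla config.json schema with "command" and "config_files" keys; the real tool also handles globs, ownership, and optional files:

    import json
    import shutil
    from pathlib import Path

    def kolla_set_configs(config_json="/var/lib/kolla/config_files/config.json"):
        """Sketch of the COPY_ALWAYS behaviour logged above."""
        cfg = json.loads(Path(config_json).read_text())
        for item in cfg.get("config_files", []):
            dest = Path(item["dest"])
            dest.parent.mkdir(parents=True, exist_ok=True)
            if dest.exists():
                dest.unlink()                     # "Deleting ..."
            shutil.copy2(item["source"], dest)    # "Copying ... to ..."
            if "perm" in item:
                dest.chmod(int(item["perm"], 8))  # "Setting permission ..."
        Path("/run_command").write_text(cfg["command"])  # "Writing out command"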
Nov 25 10:27:25 compute-0 podman[222750]: 2025-11-25 10:27:25.920441266 +0000 UTC m=+0.063554049 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: ++ cat /run_command
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + ARGS=
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + sudo kolla_copy_cacerts
Nov 25 10:27:25 compute-0 systemd[1]: 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab-7d0bc2dd8869c584.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:27:25 compute-0 systemd[1]: 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab-7d0bc2dd8869c584.service: Failed with result 'exit-code'.
Nov 25 10:27:25 compute-0 sudo[222782]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 10:27:25 compute-0 sudo[222782]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:27:25 compute-0 sudo[222782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:27:25 compute-0 sudo[222782]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + [[ ! -n '' ]]
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + . kolla_extend_start
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + umask 0022
Nov 25 10:27:25 compute-0 ceilometer_agent_ipmi[222743]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
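The shell trace above is the tail end of kolla_start: it reads the command written earlier to /run_command and execs it, so ceilometer-polling replaces the startup shell as the container's main process and its output keeps flowing to the journald log driver. A two-line sketch of that hand-off:

    import os
    from pathlib import Path

    # Read the command kolla_set_configs wrote, then replace this
    # process with it (mirrors `exec /usr/bin/ceilometer-polling ...`).
    cmd = Path("/run_command").read_text().split()
    os.execvp(cmd[0], cmd)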
Nov 25 10:27:26 compute-0 sudo[222922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mthrfjqbjhquahltxmtczjwmbwtetlrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066446.1854603-453-50800039803777/AnsiballZ_container_config_data.py'
Nov 25 10:27:26 compute-0 sudo[222922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:26 compute-0 python3.9[222924]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Nov 25 10:27:26 compute-0 sudo[222922]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.848 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.848 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.848 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.848 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.849 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.850 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.851 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.852 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.853 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.854 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.855 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.856 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.857 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.858 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.859 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.860 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.861 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.862 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.863 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.864 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.864 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.864 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.864 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.864 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.864 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
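[editor's note on the block above] This is oslo.config's startup dump: with debug enabled, cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values(), which walks every registered option group and emits one DEBUG line per option, prefixing each with its group (polling.batch_size, gnocchi.timeout, and so on) and masking anything registered with secret=True (publisher.telemetry_secret, transport_url, the rgw keys) as ****. A minimal sketch of that mechanism, with hypothetical option names ("demo", "api_timeout", "admin_password"); only log_opt_values() and the registration calls are the real oslo.config API:

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('api_timeout', default=600),
        cfg.StrOpt('admin_password', secret=True),  # rendered as **** in the dump
    ], group='demo')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    CONF([])                                  # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # produces lines like the ones above

The row of asterisks that closes the dump is printed by log_opt_values() itself, which is why it carries the same cfg.py source reference as the option lines.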
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.887 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.889 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 25 10:27:26 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:26.890 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
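[editor's note] The three INFO lines above are the dynamic-pollster discovery pass: the polling manager scans each directory in polling.pollsters_definitions_dirs (here /etc/ceilometer/pollsters.d) for YAML definition files and, finding none, falls back to the setuptools entry-point pollsters of the requested namespaces. A hedged sketch of feeding it one definition; the meter name, fields, and endpoint below are illustrative only, not a verified schema, and writing under /etc/ceilometer requires root:

    import pathlib
    import textwrap

    # Hypothetical dynamic pollster definition; field names follow the
    # upstream dynamic-pollster docs but are not taken from this deployment.
    definition = textwrap.dedent("""\
        ---
        - name: "dynamic.demo.requests"
          sample_type: "gauge"
          unit: "request"
          value_attribute: "count"
          url_path: "http://127.0.0.1:8080/v1/stats"
        """)
    pathlib.Path('/etc/ceilometer/pollsters.d/demo.yaml').write_text(definition)
    # restart the polling agent afterwards so it re-scans the directory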
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.030 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp12xaz5n3/privsep.sock']
Nov 25 10:27:27 compute-0 sudo[223009]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp12xaz5n3/privsep.sock
Nov 25 10:27:27 compute-0 sudo[223009]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:27:27 compute-0 sudo[223009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:27:27 compute-0 sudo[223081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uekblejrunlxzxqmyzluhefrcsgkrbdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066446.9434488-462-213567793225251/AnsiballZ_container_config_hash.py'
Nov 25 10:27:27 compute-0 sudo[223081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:27 compute-0 python3.9[223083]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 25 10:27:27 compute-0 sudo[223081]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:27 compute-0 sudo[223009]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.703 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.703 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp12xaz5n3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.572 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.577 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.582 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.583 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
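[editor's note] The sequence above is the oslo.privsep handshake: the unprivileged agent (uid 42405, the ceilometer user) execs sudo ceilometer-rootwrap privsep-helper, the helper forks a daemon that drops to exactly the capability set declared by its context (CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN, as logged), and the two sides then talk over the unix socket under /tmp. The child daemon's lines (pid 19) carry earlier timestamps than the parent's "Spawned" line because the journal interleaves the two processes. A minimal sketch of such a context, assuming it mirrors ceilometer.privsep.sys_admin_pctxt; read_sensor_data() is hypothetical:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_sensor_data():
        # body runs inside the forked helper with the capabilities above,
        # not inside the polling agent itself
        ...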
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.808 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.808 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.809 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.809 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.809 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.809 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.809 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.810 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.810 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.810 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.810 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.810 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.810 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
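[editor's note] The agent stays up despite this WARNING: every pollster in the 'ipmi' namespace was skipped, the ipmitool-backed ones (hardware.ipmi.current, .fan, .temperature, .voltage) because no usable IPMI device exists on this host, which is typical for a virtual machine, and the Intel Node Manager ones (hardware.ipmi.node.*) because their probe raises during instantiation, surfacing as the object.__new__() message. A hedged diagnostic, separate from the agent's own probe, for whether ipmitool could reach a BMC on a given host:

    import shutil
    import subprocess

    if shutil.which('ipmitool') is None:
        print('ipmitool binary not installed')
    else:
        # 'mc info' queries the baseboard management controller; on a VM
        # without a BMC this typically fails with a device-open error
        result = subprocess.run(['ipmitool', 'mc', 'info'],
                                capture_output=True, text=True)
        print('BMC reachable' if result.returncode == 0
              else result.stderr.strip())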
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.813 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.814 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.814 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.814 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.814 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.814 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.814 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.814 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.815 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.816 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.817 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.818 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.819 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.820 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.821 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.822 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.823 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.823 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.823 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.823 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.823 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.823 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.823 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.823 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.824 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.825 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.826 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.827 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.828 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.829 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.830 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.830 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.830 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.830 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.830 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.830 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.830 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.830 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.831 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.832 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 25 10:27:27 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:27.834 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 25 10:27:28 compute-0 sudo[223239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoxbjyqeenqqhunltoyixflxsmjtbbuu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066447.8077788-472-163296787813300/AnsiballZ_edpm_container_manage.py'
Nov 25 10:27:28 compute-0 sudo[223239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:28 compute-0 python3[223241]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Nov 25 10:27:28 compute-0 podman[223278]: 2025-11-25 10:27:28.598954986 +0000 UTC m=+0.105325492 container create ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.4, name=ubi9, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, managed_by=edpm_ansible)
Nov 25 10:27:28 compute-0 podman[223278]: 2025-11-25 10:27:28.521578907 +0000 UTC m=+0.027949433 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 25 10:27:28 compute-0 python3[223241]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Nov 25 10:27:28 compute-0 sudo[223239]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:29 compute-0 sudo[223477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkedijanvnuimfendcarcgqcioqwihjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066448.8987875-480-200781322682626/AnsiballZ_stat.py'
Nov 25 10:27:29 compute-0 sudo[223477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:29 compute-0 podman[223440]: 2025-11-25 10:27:29.192924398 +0000 UTC m=+0.065609359 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:27:29 compute-0 python3.9[223485]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:27:29 compute-0 sudo[223477]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:29 compute-0 podman[203557]: time="2025-11-25T10:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:27:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28285 "" "Go-http-client/1.1"
Nov 25 10:27:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3851 "" "Go-http-client/1.1"
Nov 25 10:27:29 compute-0 sudo[223644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdgbbarchmaqwpwygcyysxeroghuaipi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066449.674284-489-273031151254337/AnsiballZ_file.py'
Nov 25 10:27:29 compute-0 sudo[223644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:30 compute-0 python3.9[223646]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:30 compute-0 sudo[223644]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:30 compute-0 sudo[223795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvdvoonzennaamtyypcjflgezehqnrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066450.2641146-489-249962950079165/AnsiballZ_copy.py'
Nov 25 10:27:30 compute-0 sudo[223795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:30 compute-0 python3.9[223797]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764066450.2641146-489-249962950079165/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:30 compute-0 sudo[223795]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:31 compute-0 sudo[223871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpbkdhpwtmuyshwuqninhhnfhcbtfevf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066450.2641146-489-249962950079165/AnsiballZ_systemd.py'
Nov 25 10:27:31 compute-0 sudo[223871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:31 compute-0 openstack_network_exporter[205722]: ERROR   10:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:27:31 compute-0 openstack_network_exporter[205722]: ERROR   10:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:27:31 compute-0 openstack_network_exporter[205722]: ERROR   10:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:27:31 compute-0 openstack_network_exporter[205722]: ERROR   10:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:27:31 compute-0 openstack_network_exporter[205722]: 
Nov 25 10:27:31 compute-0 openstack_network_exporter[205722]: ERROR   10:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:27:31 compute-0 openstack_network_exporter[205722]: 
Nov 25 10:27:31 compute-0 python3.9[223873]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 25 10:27:31 compute-0 systemd[1]: Reloading.
Nov 25 10:27:31 compute-0 systemd-rc-local-generator[223897]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:27:31 compute-0 systemd-sysv-generator[223902]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:27:31 compute-0 sudo[223871]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:32 compute-0 sudo[223982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdzukddueiqujmyqijilvdvxmwedevrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066450.2641146-489-249962950079165/AnsiballZ_systemd.py'
Nov 25 10:27:32 compute-0 sudo[223982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:32 compute-0 python3.9[223984]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 25 10:27:32 compute-0 systemd[1]: Reloading.
Nov 25 10:27:32 compute-0 systemd-rc-local-generator[224011]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 25 10:27:32 compute-0 systemd-sysv-generator[224017]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 25 10:27:33 compute-0 systemd[1]: Starting kepler container...
Nov 25 10:27:33 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:27:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34.
Nov 25 10:27:33 compute-0 podman[224024]: 2025-11-25 10:27:33.199986926 +0000 UTC m=+0.122279937 container init ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 25 10:27:33 compute-0 podman[224024]: 2025-11-25 10:27:33.22213884 +0000 UTC m=+0.144431831 container start ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., vcs-type=git, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Nov 25 10:27:33 compute-0 podman[224024]: kepler
Nov 25 10:27:33 compute-0 systemd[1]: Started kepler container.
Nov 25 10:27:33 compute-0 kepler[224039]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.267967       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.268189       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.268223       1 config.go:295] kernel version: 5.14
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.269749       1 power.go:78] Unable to obtain power, use estimate method
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.269794       1 redfish.go:169] failed to get redfish credential file path
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.270355       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.270368       1 power.go:79] using none to obtain power
Nov 25 10:27:33 compute-0 kepler[224039]: E1125 10:27:33.270387       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 25 10:27:33 compute-0 kepler[224039]: E1125 10:27:33.270417       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 25 10:27:33 compute-0 sudo[223982]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:33 compute-0 kepler[224039]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.272651       1 exporter.go:84] Number of CPUs: 8
Nov 25 10:27:33 compute-0 podman[224044]: 2025-11-25 10:27:33.335355282 +0000 UTC m=+0.102331187 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, vcs-type=git)
Nov 25 10:27:33 compute-0 systemd[1]: ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34-527fa713e10b5dc5.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:27:33 compute-0 systemd[1]: ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34-527fa713e10b5dc5.service: Failed with result 'exit-code'.
Nov 25 10:27:33 compute-0 sudo[224222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpbbfwxtwjjzzgqagtmeqetdcxagbaoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066453.4652293-513-107877372973538/AnsiballZ_systemd.py'
Nov 25 10:27:33 compute-0 sudo[224222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.842599       1 watcher.go:83] Using in cluster k8s config
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.842655       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 25 10:27:33 compute-0 kepler[224039]: E1125 10:27:33.842744       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.846520       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.846593       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.850612       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.850646       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.859520       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.859645       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.859665       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.867897       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.867933       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.867937       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.867942       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.867947       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.867961       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.868042       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.868072       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.868095       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.868112       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.869014       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 25 10:27:33 compute-0 kepler[224039]: I1125 10:27:33.869433       1 exporter.go:208] Started Kepler in 601.795948ms
Nov 25 10:27:34 compute-0 python3.9[224224]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:27:34 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Nov 25 10:27:34 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:34.224 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 25 10:27:34 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:34.326 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 25 10:27:34 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:34.327 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 25 10:27:34 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:34.327 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 25 10:27:34 compute-0 ceilometer_agent_ipmi[222743]: 2025-11-25 10:27:34.337 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Nov 25 10:27:34 compute-0 systemd[1]: libpod-8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.scope: Deactivated successfully.
Nov 25 10:27:34 compute-0 systemd[1]: libpod-8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.scope: Consumed 2.303s CPU time.
Nov 25 10:27:34 compute-0 podman[224238]: 2025-11-25 10:27:34.53041812 +0000 UTC m=+0.361834752 container died 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 10:27:34 compute-0 systemd[1]: 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab-7d0bc2dd8869c584.timer: Deactivated successfully.
Nov 25 10:27:34 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.
Nov 25 10:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab-userdata-shm.mount: Deactivated successfully.
Nov 25 10:27:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be-merged.mount: Deactivated successfully.
Nov 25 10:27:34 compute-0 podman[224238]: 2025-11-25 10:27:34.605972066 +0000 UTC m=+0.437388688 container cleanup 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:27:34 compute-0 podman[224238]: ceilometer_agent_ipmi
Nov 25 10:27:34 compute-0 podman[224255]: 2025-11-25 10:27:34.641217511 +0000 UTC m=+0.086008812 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:27:34 compute-0 podman[224282]: ceilometer_agent_ipmi
Nov 25 10:27:34 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Nov 25 10:27:34 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Nov 25 10:27:34 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 25 10:27:34 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 25 10:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 25 10:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 25 10:27:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef08056d50a5db89e86625d916cd7d0064cef6c5b0f560bf93750f93528c7be/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 25 10:27:34 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.
Nov 25 10:27:34 compute-0 podman[224294]: 2025-11-25 10:27:34.872834706 +0000 UTC m=+0.148919991 container init 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 10:27:34 compute-0 ceilometer_agent_ipmi[224308]: + sudo -E kolla_set_configs
Nov 25 10:27:34 compute-0 podman[224294]: 2025-11-25 10:27:34.903856298 +0000 UTC m=+0.179941553 container start 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:27:34 compute-0 podman[224294]: ceilometer_agent_ipmi
Nov 25 10:27:34 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 25 10:27:34 compute-0 sudo[224315]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 25 10:27:34 compute-0 sudo[224315]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:27:34 compute-0 sudo[224315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:27:34 compute-0 sudo[224222]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Validating config file
Nov 25 10:27:35 compute-0 podman[224314]: 2025-11-25 10:27:35.016007489 +0000 UTC m=+0.097961900 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Copying service configuration files
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: INFO:__main__:Writing out command to execute
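The Deleting/Copying/Setting-permission triplets above are Kolla's COPY_ALWAYS config strategy at work: every file mounted under /var/lib/openstack/config is re-copied into /etc/ceilometer on each container start, and the command to exec is then written out for the next stage. A minimal Python sketch of that loop, assuming the usual kolla config.json shape ('command' plus a 'config_files' list of source/dest/owner/perm entries) rather than kolla's full set_configs.py behavior:

    import json
    import os
    import shutil
    import subprocess

    def copy_always(config_json="/var/lib/kolla/config_files/config.json"):
        # Happy-path sketch only; the real set_configs.py also handles
        # globbed sources, optional files and directory merges.
        with open(config_json) as fh:
            cfg = json.load(fh)
        for f in cfg.get("config_files", []):
            src, dest = f["source"], f["dest"]
            if os.path.exists(dest):
                os.remove(dest)                        # "Deleting <dest>"
            shutil.copy(src, dest)                     # "Copying <src> to <dest>"
            subprocess.check_call(["chown", f["owner"], dest])
            os.chmod(dest, int(f["perm"], 8))          # "Setting permission for <dest>"
        with open("/run_command", "w") as fh:          # "Writing out command to execute"
            fh.write(cfg["command"])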
Nov 25 10:27:35 compute-0 systemd[1]: 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab-63f73fb59966c878.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:27:35 compute-0 systemd[1]: 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab-63f73fb59966c878.service: Failed with result 'exit-code'.
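The health_status=starting event for ceilometer_agent_ipmi further up and the failed 8663f4ff...-63f73fb59966c878.service unit here are two views of the same mechanism: podman drives container healthchecks through transient systemd timer/service pairs named <container-id>-<suffix>, and a nonzero exit of the configured test ('/openstack/healthcheck ipmi') fails the transient unit while the container itself stays in the "starting" health state with a failing streak of 1. A small sketch for querying that state from the host, assuming podman on PATH (the exact inspect field has varied across podman releases):

    import json
    import subprocess

    def container_health(name):
        # Newer podman exposes the docker-compatible .State.Health;
        # older releases used .State.Healthcheck instead.
        out = subprocess.check_output(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name])
        health = json.loads(out)
        # Mirrors health_status / health_failing_streak in the event lines.
        return health.get("Status"), health.get("FailingStreak")

    print(container_health("ceilometer_agent_ipmi"))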
Nov 25 10:27:35 compute-0 sudo[224315]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: ++ cat /run_command
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: + ARGS=
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: + sudo kolla_copy_cacerts
Nov 25 10:27:35 compute-0 sudo[224354]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 25 10:27:35 compute-0 sudo[224354]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:27:35 compute-0 sudo[224354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Nov 25 10:27:35 compute-0 sudo[224354]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: + [[ ! -n '' ]]
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: + . kolla_extend_start
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: + umask 0022
Nov 25 10:27:35 compute-0 ceilometer_agent_ipmi[224308]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
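The shell trace above is the tail of kolla_start: the command written during the config step is read back from /run_command, announced, and exec'd so that ceilometer-polling replaces the shell as the container's main process. The same flow rendered in Python, as an illustrative sketch:

    import os
    import shlex

    def exec_run_command():
        cmd = open("/run_command").read().strip()
        print("Running command: %r" % cmd)
        os.umask(0o022)               # matches the traced "umask 0022"
        argv = shlex.split(cmd)
        # Replaces this process image; never returns on success.
        os.execvp(argv[0], argv)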
Nov 25 10:27:35 compute-0 sudo[224488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ownfeajbkcmcvoohjlmyrqahakpttzig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066455.1787338-521-152437052950832/AnsiballZ_systemd.py'
Nov 25 10:27:35 compute-0 sudo[224488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:35 compute-0 python3.9[224490]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
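This ansible.builtin.systemd call (state=restarted, system scope) is what drives the kepler stop/start sequence that follows. Functionally it reduces to a systemctl restart, sketched below; the real module additionally gathers unit facts and honors no_block, enabled and masked:

    import subprocess

    def systemd_restart(unit, daemon_reload=False):
        if daemon_reload:
            subprocess.check_call(["systemctl", "daemon-reload"])
        subprocess.check_call(["systemctl", "restart", unit])

    systemd_restart("edpm_kepler.service")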
Nov 25 10:27:35 compute-0 systemd[1]: Stopping kepler container...
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.016 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.016 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.016 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.017 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.017 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.017 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.017 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.017 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.017 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.017 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.017 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.018 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.018 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.018 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.018 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.018 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.018 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 kepler[224039]: I1125 10:27:36.019067       1 exporter.go:218] Received shutdown signal
Nov 25 10:27:36 compute-0 kepler[224039]: I1125 10:27:36.019745       1 exporter.go:226] Exiting...
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.020 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.020 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.020 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.020 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.020 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.020 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.021 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.022 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.022 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.022 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.022 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.022 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.022 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.022 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.022 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.023 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.023 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.023 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.023 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:27:36.021 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:27:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:27:36.023 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.023 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.023 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.023 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.024 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.024 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.024 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.024 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.024 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.024 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.024 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.024 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.025 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.025 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.025 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.025 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.025 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.025 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.025 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.025 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.026 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.026 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.026 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.026 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.026 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.026 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.026 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.026 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.027 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.027 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.027 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.027 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.027 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.027 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.027 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.027 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.028 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.028 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.028 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.028 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.028 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.028 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.028 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.028 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.029 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.030 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.030 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.030 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.030 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.030 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.030 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.030 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.031 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.031 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.031 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.031 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.031 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.031 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.031 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.032 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.032 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.032 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.032 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.032 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.032 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.032 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.032 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.033 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:27:36.023 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.034 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.035 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.036 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.037 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.037 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
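The banner-delimited dump that just closed is oslo.config's standard startup behavior when debug is enabled: after parsing the CLI, config_file and config_dir sources, log_opt_values() prints every registered option, masking options declared secret (backend_url, telemetry_secret, host_password, ...) as '****'. A self-contained sketch of the same call, reusing the paths from the dump:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # Register one sample option; a real service registers hundreds.
    cfg.CONF.register_opts([cfg.IntOpt("batch_size", default=50)])

    # Same sources as the dump: CLI args, then the config file, then
    # every *.conf under the config dir.
    cfg.CONF(["--config-file", "/etc/ceilometer/ceilometer.conf",
              "--config-dir", "/etc/ceilometer/ceilometer.conf.d"],
             project="ceilometer")

    # Emits the asterisk banner, one line per option, and the closing banner.
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)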
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.057 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.058 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.060 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
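The three polling.manager lines above show the dynamic-pollster lookup coming up empty: the configured pollsters_definitions_dirs are scanned for YAML definition files and none exist on this node. The scan is roughly equivalent to this hypothetical helper:

    import glob
    import os

    def find_dynamic_pollsters(dirs=("/etc/ceilometer/pollsters.d",)):
        # An empty result produces the "No dynamic pollsters found" messages.
        found = []
        for d in dirs:
            if os.path.isdir(d):
                found.extend(sorted(glob.glob(os.path.join(d, "*.yaml"))))
        return found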
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.073 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmph4z0y5me/privsep.sock']
Nov 25 10:27:36 compute-0 sudo[224511]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmph4z0y5me/privsep.sock
Nov 25 10:27:36 compute-0 sudo[224511]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 25 10:27:36 compute-0 sudo[224511]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
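The privsep launch above shows the unprivileged agent (uid 42405) using sudo plus ceilometer-rootwrap to spawn a root helper that it then talks to over /tmp/.../privsep.sock; pam_systemd's "Failed to connect to system bus" is harmless here, since there is no per-session systemd inside the container. The context named on the command line, ceilometer.privsep.sys_admin_pctxt, is declared with oslo.privsep roughly as follows (the decorated function is a hypothetical example, not ceilometer's actual code):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        # pypath must be importable by the helper in a real service.
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def read_privileged_file(path):
        # Runs inside the root privsep-helper process, not the agent.
        with open(path) as f:
            return f.read()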
Nov 25 10:27:36 compute-0 systemd[1]: libpod-ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34.scope: Deactivated successfully.
Nov 25 10:27:36 compute-0 podman[224494]: 2025-11-25 10:27:36.199617594 +0000 UTC m=+0.229746432 container died ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Nov 25 10:27:36 compute-0 systemd[1]: ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34-527fa713e10b5dc5.timer: Deactivated successfully.
Nov 25 10:27:36 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34.
Nov 25 10:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34-userdata-shm.mount: Deactivated successfully.
Nov 25 10:27:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a418f305cbf227ccd84e31523aedbed668b9ae88ebbb23028d8ecde37609b21-merged.mount: Deactivated successfully.
Nov 25 10:27:36 compute-0 podman[224494]: 2025-11-25 10:27:36.239273977 +0000 UTC m=+0.269402805 container cleanup ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, release=1214.1726694543, name=ubi9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 10:27:36 compute-0 podman[224494]: kepler
Nov 25 10:27:36 compute-0 podman[224525]: kepler
Nov 25 10:27:36 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Nov 25 10:27:36 compute-0 systemd[1]: Stopped kepler container.
Nov 25 10:27:36 compute-0 systemd[1]: Starting kepler container...
Nov 25 10:27:36 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:27:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34.
Nov 25 10:27:36 compute-0 podman[224535]: 2025-11-25 10:27:36.46916159 +0000 UTC m=+0.128173477 container init ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, container_name=kepler, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 10:27:36 compute-0 kepler[224552]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 25 10:27:36 compute-0 podman[224535]: 2025-11-25 10:27:36.497227276 +0000 UTC m=+0.156239143 container start ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, name=ubi9)
Nov 25 10:27:36 compute-0 podman[224535]: kepler
Nov 25 10:27:36 compute-0 kepler[224552]: I1125 10:27:36.503779       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 25 10:27:36 compute-0 kepler[224552]: I1125 10:27:36.503942       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 25 10:27:36 compute-0 kepler[224552]: I1125 10:27:36.503967       1 config.go:295] kernel version: 5.14
Nov 25 10:27:36 compute-0 kepler[224552]: I1125 10:27:36.504533       1 power.go:78] Unable to obtain power, use estimate method
Nov 25 10:27:36 compute-0 kepler[224552]: I1125 10:27:36.504615       1 redfish.go:169] failed to get redfish credential file path
Nov 25 10:27:36 compute-0 kepler[224552]: I1125 10:27:36.505133       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 25 10:27:36 compute-0 kepler[224552]: I1125 10:27:36.505153       1 power.go:79] using none to obtain power
Nov 25 10:27:36 compute-0 kepler[224552]: E1125 10:27:36.505174       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 25 10:27:36 compute-0 kepler[224552]: E1125 10:27:36.505203       1 exporter.go:154] failed to init GPU accelerators: no devices found
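Annotation: the power.go/redfish.go/acpi.go lines above trace Kepler's platform power-source selection on this Nova VM: no directly readable power source, no Redfish credential file configured, no ACPI power meter in sysfs, hence the fallback to "none" with estimated power. (The two accelerator errors are a separate probe: ENABLE_GPU=true is set, but no GPU device exists.) A minimal sketch of that fallback chain, with paths and the environment variable name being illustrative assumptions rather than Kepler's actual code:

```python
# Sketch of the power-source fallback the log records: try a Redfish
# credential file, then an ACPI power meter in sysfs, else fall back to
# "none" (estimated power). Names and paths here are illustrative.
import glob
import os

def select_power_source() -> str:
    # Redfish needs a credential file; the log shows none was configured.
    if os.path.isfile(os.environ.get('REDFISH_CRED_FILE_PATH', '')):
        return 'redfish'
    # ACPI power meters surface under the hwmon tree on bare metal;
    # on a VM (as here) no such device exists.
    if glob.glob('/sys/class/hwmon/hwmon*/device/power1_average'):
        return 'acpi'
    return 'none'  # estimate power instead of measuring it

print('using', select_power_source(), 'to obtain power')
```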
Nov 25 10:27:36 compute-0 kepler[224552]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
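Annotation: this repeated warning is benign. On x86 Linux, CPU 0 normally cannot be hot-unplugged, so the kernel does not create /sys/devices/system/cpu/cpu0/online at all; a reader has to treat the missing attribute as "online", and a naive open() produces exactly the warning seen here. A small sketch of that sysfs convention:

```python
# Sketch: read a CPU's online state from sysfs, treating a missing
# "online" attribute as online. cpu0 usually cannot be offlined on x86,
# so /sys/devices/system/cpu/cpu0/online does not exist and naive
# readers log the warning seen above.
import os

def cpu_is_online(cpu: int) -> bool:
    path = f'/sys/devices/system/cpu/cpu{cpu}/online'
    if not os.path.exists(path):  # no attribute => cannot be offlined
        return True
    with open(path) as f:
        return f.read().strip() == '1'

print(cpu_is_online(0))
```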
Nov 25 10:27:36 compute-0 systemd[1]: Started kepler container.
Nov 25 10:27:36 compute-0 kepler[224552]: I1125 10:27:36.507810       1 exporter.go:84] Number of CPUs: 8
Nov 25 10:27:36 compute-0 sudo[224488]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:36 compute-0 podman[224562]: 2025-11-25 10:27:36.580343713 +0000 UTC m=+0.071302905 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:27:36 compute-0 systemd[1]: ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34-42016392e2036766.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:27:36 compute-0 systemd[1]: ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34-42016392e2036766.service: Failed with result 'exit-code'.
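Annotation: the transient ff117d62…-42016392e2036766.service unit failing here is podman's periodic healthcheck runner for the kepler container. The configured test (/openstack/healthcheck kepler) ran while the exporter was still coming up — the health_status event two lines above shows health_status=starting, health_failing_streak=1 — so a first failed probe is expected. A rough stand-in for what such a probe verifies, assuming Kepler's Prometheus endpoint on the published port 8888 (the real check is the mounted /openstack/healthcheck script):

```python
# Sketch: a liveness probe in the spirit of the configured healthcheck,
# hitting Kepler's metrics endpoint on port 8888 (per config_data).
# /metrics is the conventional Prometheus path; this is an assumption,
# not the /openstack/healthcheck script's actual logic.
import sys
import urllib.request

def probe(url: str = 'http://127.0.0.1:8888/metrics', timeout: float = 5.0) -> int:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 0 if resp.status == 200 else 1
    except OSError:
        return 1  # e.g. connection refused while the exporter is starting

sys.exit(probe())
```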
Nov 25 10:27:36 compute-0 sudo[224511]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.755 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.756 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmph4z0y5me/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.624 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.628 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.631 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.631 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.895 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.895 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.896 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.896 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.896 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.896 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.896 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.896 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.897 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.897 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.897 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.897 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.897 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
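Annotation: every hardware.ipmi.* pollster is skipped because this compute node is a Nova VM with no BMC — the ipmitool-backed pollsters report "IPMITool not supported on host" and the Node Manager ones fail during instantiation, leaving the ipmi namespace empty. One plausible local check for whether in-band IPMI polling could ever work on a host (an illustration, not ceilometer's actual detection logic):

```python
# Sketch: in-band IPMI requires a kernel-exposed IPMI device node and the
# ipmitool binary. On a VM neither is present, which is consistent with
# every hardware.ipmi.* pollster above being skipped.
import glob
import shutil

def ipmi_available() -> bool:
    has_device = bool(glob.glob('/dev/ipmi*'))
    has_tool = shutil.which('ipmitool') is not None
    return has_device and has_tool

print('IPMI available:', ipmi_available())
```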
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.900 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.900 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.900 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.900 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.900 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.900 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.900 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.900 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.906 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.907 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.908 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.909 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.910 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.911 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.911 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.911 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.911 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.913 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.913 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.913 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.913 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.914 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.914 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.914 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.914 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.914 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.914 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.914 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.914 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.915 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.916 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.917 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.917 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.917 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.918 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.918 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.918 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.918 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.918 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.918 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.918 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.919 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.920 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.920 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.920 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.920 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.920 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.920 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.920 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.921 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.921 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.921 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.921 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.921 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.921 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.921 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.921 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.922 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.923 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.923 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.923 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.924 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.924 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.924 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.924 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.925 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.926 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.927 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.928 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.929 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.930 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.931 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.932 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.932 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.932 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.932 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 25 10:27:36 compute-0 ceilometer_agent_ipmi[224308]: 2025-11-25 10:27:36.934 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
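[editor's sketch] The config dict the IPMI agent reports loading above ({'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]}) is the parsed form of its polling definition. A minimal round-trip of that structure with PyYAML, to show the YAML shape it corresponds to; the filename polling.yaml is an assumption, not taken from this log:

    # Sketch: reproduce the polling definition that ceilometer.agent logged
    # loading at 10:27:36.934 and render it as YAML (requires PyYAML).
    import yaml

    config = {
        "sources": [
            {"name": "pollsters", "interval": 120, "meters": ["hardware.*"]},
        ]
    }

    text = yaml.safe_dump(config, sort_keys=False)
    print(text)                            # YAML equivalent of the logged dict
    assert yaml.safe_load(text) == config  # parses back to the same structure

With a 120-second interval and a 'hardware.*' meter glob, this source is what drives the IPMI pollsters whose option dump fills the preceding lines.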
Nov 25 10:27:36 compute-0 sudo[224740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmjmiigwklmnyuzplldstattgatbgjjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066456.7100406-529-217925499783850/AnsiballZ_find.py'
Nov 25 10:27:36 compute-0 sudo[224740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.114476       1 watcher.go:83] Using in cluster k8s config
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.114515       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 25 10:27:37 compute-0 kepler[224552]: E1125 10:27:37.114608       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.118631       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.118660       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.122816       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.122859       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.130300       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.130348       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.130368       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137159       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137191       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137196       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137200       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137205       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137216       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137490       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137514       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137532       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137596       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.137697       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 25 10:27:37 compute-0 kepler[224552]: I1125 10:27:37.138760       1 exporter.go:208] Started Kepler in 635.291662ms
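[editor's sketch] The kepler lines above show it registering Process, Container, VM, and Node Prometheus metrics and then listening on 0.0.0.0:8888. A minimal stdlib sketch of scraping that exporter from the same host; the /metrics path is the usual Prometheus convention and is assumed here, not stated in the log:

    # Sketch: fetch the Prometheus exposition text from the Kepler exporter
    # the log shows listening on 0.0.0.0:8888 (path /metrics is assumed).
    from urllib.request import urlopen

    with urlopen("http://127.0.0.1:8888/metrics", timeout=5) as resp:
        body = resp.read().decode("utf-8")

    # Keep the output short: print only kepler_* sample lines.
    for line in body.splitlines():
        if line.startswith("kepler_"):
            print(line)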
Nov 25 10:27:37 compute-0 python3.9[224742]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 25 10:27:37 compute-0 sudo[224740]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:38 compute-0 sudo[224902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlyxtauaygvzczxuhaftlchfdsbgsgnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066457.680479-539-148630392886478/AnsiballZ_podman_container_info.py'
Nov 25 10:27:38 compute-0 sudo[224902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:38 compute-0 python3.9[224904]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 25 10:27:38 compute-0 sudo[224902]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:39 compute-0 sudo[225068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eekddxhwfidevdvrpmrivdrwxfepcaec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066458.7752886-547-266384902931532/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:39 compute-0 sudo[225068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:39 compute-0 python3.9[225070]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:39 compute-0 systemd[1]: Started libpod-conmon-5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.scope.
Nov 25 10:27:39 compute-0 podman[225071]: 2025-11-25 10:27:39.672386618 +0000 UTC m=+0.130068353 container exec 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:27:39 compute-0 podman[225071]: 2025-11-25 10:27:39.70718871 +0000 UTC m=+0.164870435 container exec_died 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:27:39 compute-0 sudo[225068]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:39 compute-0 systemd[1]: libpod-conmon-5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.scope: Deactivated successfully.
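[editor's sketch] The podman_container_exec invocations in this stretch are the Ansible wrapper around a plain `podman exec`: the play probes the UID (id -u) and then the GID (id -g) of each service container before chowning its healthcheck directory (see the ansible.builtin.file call at 10:27:41 with owner=0/group=0). A minimal stdlib equivalent of that probe, assuming podman is on PATH and the caller has the required privileges:

    # Sketch: CLI equivalent of the podman_container_exec tasks above,
    # which run `id -u` and `id -g` inside the ovn_controller container.
    import subprocess

    def container_id(name: str, flag: str) -> int:
        """Return the uid (-u) or gid (-g) of the container's default user."""
        out = subprocess.run(
            ["podman", "exec", name, "id", flag],
            check=True, capture_output=True, text=True,
        ).stdout
        return int(out.strip())

    uid = container_id("ovn_controller", "-u")
    gid = container_id("ovn_controller", "-g")
    print(uid, gid)  # 0 0 for this root container, matching owner=0/group=0 in the file task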
Nov 25 10:27:40 compute-0 sudo[225251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzbbbnxrxrgyglidykwyqekjscvsyiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066459.9634595-555-266444087141565/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:40 compute-0 sudo[225251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:40 compute-0 python3.9[225253]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:40 compute-0 systemd[1]: Started libpod-conmon-5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.scope.
Nov 25 10:27:40 compute-0 podman[225254]: 2025-11-25 10:27:40.63686593 +0000 UTC m=+0.111715659 container exec 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 25 10:27:40 compute-0 podman[225254]: 2025-11-25 10:27:40.669786627 +0000 UTC m=+0.144636356 container exec_died 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 10:27:40 compute-0 systemd[1]: libpod-conmon-5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022.scope: Deactivated successfully.
Nov 25 10:27:40 compute-0 sudo[225251]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:41 compute-0 sudo[225434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeklvyubzqejqfelwbkbzrxkhifsjgud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066461.075382-563-158514937333004/AnsiballZ_file.py'
Nov 25 10:27:41 compute-0 sudo[225434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:41 compute-0 python3.9[225436]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:41 compute-0 sudo[225434]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:42 compute-0 sudo[225586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcfqnoapamavepeilsssimaixowfeeuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066461.8702376-572-201957694668615/AnsiballZ_podman_container_info.py'
Nov 25 10:27:42 compute-0 sudo[225586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:42 compute-0 python3.9[225588]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 25 10:27:42 compute-0 sudo[225586]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:42 compute-0 sudo[225751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaeehainvyjsxdwodwmjwjuztdgwjyeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066462.6588113-580-163385003190350/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:42 compute-0 sudo[225751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:43 compute-0 python3.9[225753]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:43 compute-0 systemd[1]: Started libpod-conmon-1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.scope.
Nov 25 10:27:43 compute-0 podman[225757]: 2025-11-25 10:27:43.291189948 +0000 UTC m=+0.084815077 container exec 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 25 10:27:43 compute-0 podman[225757]: 2025-11-25 10:27:43.321923432 +0000 UTC m=+0.115548551 container exec_died 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 10:27:43 compute-0 systemd[1]: libpod-conmon-1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.scope: Deactivated successfully.
Nov 25 10:27:43 compute-0 sudo[225751]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:43 compute-0 podman[225786]: 2025-11-25 10:27:43.448976326 +0000 UTC m=+0.069107500 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:27:43 compute-0 sudo[225953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjfwsnywsntupeesmdbhxbnlsfnoyrvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066463.5533924-588-138059685039107/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:43 compute-0 sudo[225953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:44 compute-0 python3.9[225955]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:44 compute-0 systemd[1]: Started libpod-conmon-1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.scope.
Nov 25 10:27:44 compute-0 podman[225956]: 2025-11-25 10:27:44.222357271 +0000 UTC m=+0.096562719 container exec 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:27:44 compute-0 podman[225956]: 2025-11-25 10:27:44.255091842 +0000 UTC m=+0.129297290 container exec_died 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:27:44 compute-0 systemd[1]: libpod-conmon-1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15.scope: Deactivated successfully.
Nov 25 10:27:44 compute-0 sudo[225953]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:44 compute-0 sudo[226136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wellykjxubortkznlfdcxuqrxeqyunjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066464.4963346-596-165677820296247/AnsiballZ_file.py'
Nov 25 10:27:44 compute-0 sudo[226136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:45 compute-0 python3.9[226138]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:45 compute-0 sudo[226136]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:45 compute-0 sudo[226288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwaceawyaqmpgzvhpikumwggahmfzszj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066465.3909938-605-182858473315701/AnsiballZ_podman_container_info.py'
Nov 25 10:27:45 compute-0 sudo[226288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:45 compute-0 python3.9[226290]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 25 10:27:46 compute-0 sudo[226288]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:46 compute-0 sudo[226452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffvszmwbueeybqzmsalxhpvsfoicjrbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066466.208063-613-132012147898173/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:46 compute-0 sudo[226452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:46 compute-0 python3.9[226454]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:46 compute-0 systemd[1]: Started libpod-conmon-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope.
Nov 25 10:27:46 compute-0 podman[226455]: 2025-11-25 10:27:46.871479017 +0000 UTC m=+0.084241501 container exec b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 10:27:46 compute-0 podman[226455]: 2025-11-25 10:27:46.903711834 +0000 UTC m=+0.116474298 container exec_died b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 10:27:46 compute-0 systemd[1]: libpod-conmon-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope: Deactivated successfully.
Nov 25 10:27:46 compute-0 sudo[226452]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:47 compute-0 podman[226485]: 2025-11-25 10:27:47.029683337 +0000 UTC m=+0.064254090 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:27:47 compute-0 sudo[226656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrjdtwjwmkubjpfeozwcpyxgltdtwbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066467.121496-621-175452575275869/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:47 compute-0 sudo[226656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:47 compute-0 python3.9[226658]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:47 compute-0 systemd[1]: Started libpod-conmon-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope.
Nov 25 10:27:47 compute-0 podman[226659]: 2025-11-25 10:27:47.792781734 +0000 UTC m=+0.089336419 container exec b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 10:27:47 compute-0 podman[226659]: 2025-11-25 10:27:47.840478971 +0000 UTC m=+0.137033656 container exec_died b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:27:47 compute-0 systemd[1]: libpod-conmon-b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8.scope: Deactivated successfully.
Nov 25 10:27:47 compute-0 sudo[226656]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:48 compute-0 sudo[226851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iobveldmvrzqrhbgsgxalurmftojnjuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066468.2458518-629-42462889188476/AnsiballZ_file.py'
Nov 25 10:27:48 compute-0 sudo[226851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:48 compute-0 podman[226812]: 2025-11-25 10:27:48.645029544 +0000 UTC m=+0.095570090 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 10:27:48 compute-0 python3.9[226859]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:48 compute-0 sudo[226851]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:49 compute-0 sudo[227026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-watlmyadvsvcsuckjnjadfjbxwuzwoip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066469.0708003-638-13713433393869/AnsiballZ_podman_container_info.py'
Nov 25 10:27:49 compute-0 sudo[227026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:49 compute-0 podman[226983]: 2025-11-25 10:27:49.48270628 +0000 UTC m=+0.101300276 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller)
Nov 25 10:27:49 compute-0 python3.9[227032]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 25 10:27:49 compute-0 sudo[227026]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:50 compute-0 sudo[227199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znyabwjdiyqmahceuyocxbcglunwdaut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066469.975946-646-33810157767355/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:50 compute-0 sudo[227199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:50 compute-0 python3.9[227201]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:50 compute-0 systemd[1]: Started libpod-conmon-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope.
Nov 25 10:27:50 compute-0 podman[227202]: 2025-11-25 10:27:50.649798885 +0000 UTC m=+0.112750539 container exec 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 10:27:50 compute-0 podman[227202]: 2025-11-25 10:27:50.680397945 +0000 UTC m=+0.143349569 container exec_died 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 10:27:50 compute-0 systemd[1]: libpod-conmon-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope: Deactivated successfully.
Nov 25 10:27:50 compute-0 sudo[227199]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:51 compute-0 sudo[227382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgtibrgehgeqenmfpavboxdqfimbfzfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066470.9054332-654-106188229976322/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:51 compute-0 sudo[227382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:51 compute-0 python3.9[227384]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:51 compute-0 systemd[1]: Started libpod-conmon-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope.
Nov 25 10:27:51 compute-0 podman[227385]: 2025-11-25 10:27:51.572870863 +0000 UTC m=+0.082038496 container exec 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251118)
Nov 25 10:27:51 compute-0 podman[227385]: 2025-11-25 10:27:51.605291986 +0000 UTC m=+0.114459619 container exec_died 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute)
Nov 25 10:27:51 compute-0 systemd[1]: libpod-conmon-11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d.scope: Deactivated successfully.
Nov 25 10:27:51 compute-0 sudo[227382]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:52 compute-0 sudo[227567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntsxqphhkjamwwecvigqcveahtpecogy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066471.88591-662-3527404345985/AnsiballZ_file.py'
Nov 25 10:27:52 compute-0 sudo[227567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:52 compute-0 python3.9[227569]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:52 compute-0 sudo[227567]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:53 compute-0 sudo[227719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imsyvkjrmrfitktjrlxrcfpjsrzfbjnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066472.7692807-671-140290033959822/AnsiballZ_podman_container_info.py'
Nov 25 10:27:53 compute-0 sudo[227719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:53 compute-0 python3.9[227721]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 25 10:27:53 compute-0 sudo[227719]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:53 compute-0 sudo[227882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucbuqqansigqxvapxbzuxgzhnjayyrdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066473.5872295-679-92532141585049/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:53 compute-0 sudo[227882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:54 compute-0 python3.9[227884]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:54 compute-0 systemd[1]: Started libpod-conmon-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope.
Nov 25 10:27:54 compute-0 podman[227885]: 2025-11-25 10:27:54.248202952 +0000 UTC m=+0.101045489 container exec 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:27:54 compute-0 podman[227885]: 2025-11-25 10:27:54.294056904 +0000 UTC m=+0.146899441 container exec_died 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:27:54 compute-0 podman[227900]: 2025-11-25 10:27:54.322566003 +0000 UTC m=+0.079895353 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 10:27:54 compute-0 systemd[1]: libpod-conmon-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope: Deactivated successfully.
Nov 25 10:27:54 compute-0 sudo[227882]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:54 compute-0 sudo[228080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aigdzcietpbasvcculhikpdnobjtyvdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066474.5433297-687-247436021377238/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:54 compute-0 sudo[228080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:55 compute-0 python3.9[228082]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:55 compute-0 systemd[1]: Started libpod-conmon-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope.
Nov 25 10:27:55 compute-0 podman[228083]: 2025-11-25 10:27:55.269872177 +0000 UTC m=+0.106275671 container exec 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:27:55 compute-0 podman[228102]: 2025-11-25 10:27:55.346227637 +0000 UTC m=+0.061707085 container exec_died 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:27:55 compute-0 podman[228083]: 2025-11-25 10:27:55.375833468 +0000 UTC m=+0.212236952 container exec_died 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:27:55 compute-0 systemd[1]: libpod-conmon-7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4.scope: Deactivated successfully.
Nov 25 10:27:55 compute-0 sudo[228080]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:56 compute-0 sudo[228264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgnbflojvzfskqshylvpalopalsakmlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066475.6691651-695-230728178797229/AnsiballZ_file.py'
Nov 25 10:27:56 compute-0 sudo[228264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:56 compute-0 python3.9[228266]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:27:56 compute-0 sudo[228264]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:57 compute-0 sudo[228416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypbdgqzbxnzssggsjabsyfckonwmisxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066476.580288-704-256654175024660/AnsiballZ_podman_container_info.py'
Nov 25 10:27:57 compute-0 sudo[228416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:57 compute-0 nova_compute[189381]: 2025-11-25 10:27:57.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:57 compute-0 nova_compute[189381]: 2025-11-25 10:27:57.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:57 compute-0 nova_compute[189381]: 2025-11-25 10:27:57.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:57 compute-0 python3.9[228418]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 25 10:27:57 compute-0 sudo[228416]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:57 compute-0 sudo[228580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfxgjsklrzvllhjoznqsnurejkzwecdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066477.5739126-712-247316978636060/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:57 compute-0 sudo[228580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:58 compute-0 python3.9[228582]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:58 compute-0 systemd[1]: Started libpod-conmon-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope.
Nov 25 10:27:58 compute-0 podman[228583]: 2025-11-25 10:27:58.273975584 +0000 UTC m=+0.110784332 container exec ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:27:58 compute-0 podman[228583]: 2025-11-25 10:27:58.307038426 +0000 UTC m=+0.143847184 container exec_died ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:27:58 compute-0 systemd[1]: libpod-conmon-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope: Deactivated successfully.
Nov 25 10:27:58 compute-0 sudo[228580]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:59 compute-0 sudo[228765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuybmlzuvqcvtpylghvtqdlbmamgoeaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066478.6232958-720-136033593158674/AnsiballZ_podman_container_exec.py'
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.033 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.034 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.034 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.034 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:27:59 compute-0 sudo[228765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.059 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.059 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.059 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:27:59 compute-0 python3.9[228767]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.401 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.403 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5645MB free_disk=72.2640151977539GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.404 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.404 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:27:59 compute-0 systemd[1]: Started libpod-conmon-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope.
Nov 25 10:27:59 compute-0 podman[228768]: 2025-11-25 10:27:59.429848423 +0000 UTC m=+0.107608480 container exec ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:27:59 compute-0 podman[228768]: 2025-11-25 10:27:59.463154071 +0000 UTC m=+0.140914128 container exec_died ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.490 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.491 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:27:59 compute-0 systemd[1]: libpod-conmon-ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030.scope: Deactivated successfully.
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.520 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.534 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.536 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:27:59 compute-0 sudo[228765]: pam_unix(sudo:session): session closed for user root
Nov 25 10:27:59 compute-0 nova_compute[189381]: 2025-11-25 10:27:59.536 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:27:59 compute-0 podman[228784]: 2025-11-25 10:27:59.573100598 +0000 UTC m=+0.136633374 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:27:59 compute-0 podman[203557]: time="2025-11-25T10:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:27:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 25 10:27:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4280 "" "Go-http-client/1.1"
Nov 25 10:28:00 compute-0 sudo[228973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krupfefsajgjtkogpczgzhzzkhnlgqyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066479.765236-728-5341108334394/AnsiballZ_file.py'
Nov 25 10:28:00 compute-0 sudo[228973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:00 compute-0 python3.9[228975]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:00 compute-0 sudo[228973]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:00 compute-0 nova_compute[189381]: 2025-11-25 10:28:00.523 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:00 compute-0 nova_compute[189381]: 2025-11-25 10:28:00.524 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:28:00 compute-0 nova_compute[189381]: 2025-11-25 10:28:00.524 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:28:00 compute-0 nova_compute[189381]: 2025-11-25 10:28:00.537 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:28:00 compute-0 nova_compute[189381]: 2025-11-25 10:28:00.538 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:00 compute-0 nova_compute[189381]: 2025-11-25 10:28:00.538 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:28:00 compute-0 sudo[229125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edikjkcmraxmciviqkxmyxwsgurkioam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066480.6438904-737-280660844529982/AnsiballZ_podman_container_info.py'
Nov 25 10:28:00 compute-0 sudo[229125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:01 compute-0 python3.9[229127]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 25 10:28:01 compute-0 sudo[229125]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:01 compute-0 openstack_network_exporter[205722]: ERROR   10:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:28:01 compute-0 openstack_network_exporter[205722]: ERROR   10:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:28:01 compute-0 openstack_network_exporter[205722]: ERROR   10:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:28:01 compute-0 openstack_network_exporter[205722]: ERROR   10:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:28:01 compute-0 openstack_network_exporter[205722]: ERROR   10:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:28:01 compute-0 sudo[229290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmyknvurvhdvdelfblvcgbbeeodirwwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066481.5110462-745-226808424712028/AnsiballZ_podman_container_exec.py'
Nov 25 10:28:01 compute-0 sudo[229290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:02 compute-0 python3.9[229292]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:28:02 compute-0 systemd[1]: Started libpod-conmon-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope.
Nov 25 10:28:02 compute-0 podman[229293]: 2025-11-25 10:28:02.232142002 +0000 UTC m=+0.111200074 container exec 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public)
Nov 25 10:28:02 compute-0 podman[229293]: 2025-11-25 10:28:02.264464292 +0000 UTC m=+0.143522344 container exec_died 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Nov 25 10:28:02 compute-0 systemd[1]: libpod-conmon-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope: Deactivated successfully.
Nov 25 10:28:02 compute-0 sudo[229290]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:02 compute-0 sudo[229471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krcfcrdxhypgatoqsftebaskhdasyhna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066482.5446632-753-87972387147555/AnsiballZ_podman_container_exec.py'
Nov 25 10:28:02 compute-0 sudo[229471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:03 compute-0 python3.9[229473]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:28:03 compute-0 systemd[1]: Started libpod-conmon-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope.
Nov 25 10:28:03 compute-0 podman[229474]: 2025-11-25 10:28:03.244456386 +0000 UTC m=+0.098620228 container exec 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 25 10:28:03 compute-0 podman[229474]: 2025-11-25 10:28:03.278316951 +0000 UTC m=+0.132480783 container exec_died 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 10:28:03 compute-0 systemd[1]: libpod-conmon-57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b.scope: Deactivated successfully.
Nov 25 10:28:03 compute-0 sudo[229471]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:03 compute-0 sudo[229655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtcptstgtuderqnkyibwdgcdnspgniuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066483.610931-761-247824250406870/AnsiballZ_file.py'
Nov 25 10:28:03 compute-0 sudo[229655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:04 compute-0 python3.9[229657]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:04 compute-0 sudo[229655]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:04 compute-0 sudo[229822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxunkxbijgvnrgjdjqqqcritjppstqno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066484.4044876-770-173839974109867/AnsiballZ_podman_container_info.py'
Nov 25 10:28:04 compute-0 sudo[229822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:04 compute-0 podman[229781]: 2025-11-25 10:28:04.785935637 +0000 UTC m=+0.078833974 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 25 10:28:04 compute-0 python3.9[229829]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Nov 25 10:28:05 compute-0 sudo[229822]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:05 compute-0 sudo[230006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irjmhvaupzbgnwjhnktmeqtgjhwygfxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066485.4408598-778-228821572644065/AnsiballZ_podman_container_exec.py'
Nov 25 10:28:05 compute-0 podman[229967]: 2025-11-25 10:28:05.818678184 +0000 UTC m=+0.060071998 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:28:05 compute-0 sudo[230006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:05 compute-0 systemd[1]: 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab-63f73fb59966c878.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 10:28:05 compute-0 systemd[1]: 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab-63f73fb59966c878.service: Failed with result 'exit-code'.
Nov 25 10:28:06 compute-0 python3.9[230012]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:28:06 compute-0 systemd[1]: Started libpod-conmon-8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.scope.
Nov 25 10:28:06 compute-0 podman[230013]: 2025-11-25 10:28:06.142187622 +0000 UTC m=+0.087218943 container exec 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 10:28:06 compute-0 podman[230013]: 2025-11-25 10:28:06.175324252 +0000 UTC m=+0.120355563 container exec_died 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:28:06 compute-0 systemd[1]: libpod-conmon-8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.scope: Deactivated successfully.
Nov 25 10:28:06 compute-0 sudo[230006]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:06 compute-0 sudo[230206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcvetomtopkeaxtqzasnqyplwcanissc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066486.3973267-786-185113569518124/AnsiballZ_podman_container_exec.py'
Nov 25 10:28:06 compute-0 sudo[230206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:06 compute-0 podman[230168]: 2025-11-25 10:28:06.75446229 +0000 UTC m=+0.079025123 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Nov 25 10:28:07 compute-0 python3.9[230213]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:28:07 compute-0 systemd[1]: Started libpod-conmon-8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.scope.
Nov 25 10:28:07 compute-0 podman[230214]: 2025-11-25 10:28:07.110725376 +0000 UTC m=+0.086377428 container exec 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:28:07 compute-0 podman[230214]: 2025-11-25 10:28:07.141806626 +0000 UTC m=+0.117458678 container exec_died 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:28:07 compute-0 systemd[1]: libpod-conmon-8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab.scope: Deactivated successfully.
Nov 25 10:28:07 compute-0 sudo[230206]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:07 compute-0 sudo[230393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtfretiglfuuzqqoutzgxsaieilahice ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066487.3839753-794-140948998484953/AnsiballZ_file.py'
Nov 25 10:28:07 compute-0 sudo[230393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:07 compute-0 python3.9[230395]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:08 compute-0 sudo[230393]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:08 compute-0 sudo[230545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmpfzgsoagcpqnsydlcazlikxulnvnsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066488.3632839-803-187463894479824/AnsiballZ_podman_container_info.py'
Nov 25 10:28:08 compute-0 sudo[230545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:09 compute-0 python3.9[230547]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Nov 25 10:28:09 compute-0 sudo[230545]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:09 compute-0 sudo[230710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjlpnlrjkgkfsiprkirikdgitcxlshn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066489.398425-811-18752969106252/AnsiballZ_podman_container_exec.py'
Nov 25 10:28:09 compute-0 sudo[230710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:09 compute-0 python3.9[230712]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:28:10 compute-0 systemd[1]: Started libpod-conmon-ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34.scope.
Nov 25 10:28:10 compute-0 podman[230713]: 2025-11-25 10:28:10.102955083 +0000 UTC m=+0.090949822 container exec ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, release-0.7.12=, name=ubi9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.expose-services=)
Nov 25 10:28:10 compute-0 podman[230713]: 2025-11-25 10:28:10.137803793 +0000 UTC m=+0.125798502 container exec_died ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, config_id=edpm, version=9.4, release-0.7.12=, io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 10:28:10 compute-0 systemd[1]: libpod-conmon-ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34.scope: Deactivated successfully.
Nov 25 10:28:10 compute-0 sudo[230710]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:10 compute-0 sudo[230890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpmzvzvlppgngslrntmwsohspzswatvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066490.3916304-819-79951126377300/AnsiballZ_podman_container_exec.py'
Nov 25 10:28:10 compute-0 sudo[230890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:10 compute-0 python3.9[230892]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 25 10:28:11 compute-0 systemd[1]: Started libpod-conmon-ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34.scope.
Nov 25 10:28:11 compute-0 podman[230893]: 2025-11-25 10:28:11.097309163 +0000 UTC m=+0.099352968 container exec ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, architecture=x86_64, name=ubi9, vendor=Red Hat, Inc., version=9.4, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543)
Nov 25 10:28:11 compute-0 podman[230893]: 2025-11-25 10:28:11.129731732 +0000 UTC m=+0.131775547 container exec_died ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, config_id=edpm)
Nov 25 10:28:11 compute-0 systemd[1]: libpod-conmon-ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34.scope: Deactivated successfully.
Nov 25 10:28:11 compute-0 sudo[230890]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:11 compute-0 sudo[231073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knrodhlibyycpukjthznqjlcryiuxegd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066491.421056-827-76627932227757/AnsiballZ_file.py'
Nov 25 10:28:11 compute-0 sudo[231073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:11 compute-0 python3.9[231075]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:11 compute-0 sudo[231073]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:12 compute-0 sudo[231225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjhqifxgsrgfcglarbeaoaiomckstud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066492.2393646-836-18238255706417/AnsiballZ_file.py'
Nov 25 10:28:12 compute-0 sudo[231225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:12 compute-0 python3.9[231227]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:12 compute-0 sudo[231225]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:13 compute-0 sudo[231377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adcvueblbdqvnlngbngvjazdvdcqgsci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066493.0104125-844-188736793220569/AnsiballZ_stat.py'
Nov 25 10:28:13 compute-0 sudo[231377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:13 compute-0 python3.9[231379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:13 compute-0 sudo[231377]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:13 compute-0 podman[231450]: 2025-11-25 10:28:13.957867457 +0000 UTC m=+0.069612048 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:28:14 compute-0 sudo[231516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbvihppgdbpzmmulgwraorkpoimfnicr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066493.0104125-844-188736793220569/AnsiballZ_copy.py'
Nov 25 10:28:14 compute-0 sudo[231516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:14 compute-0 python3.9[231518]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764066493.0104125-844-188736793220569/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:14 compute-0 sudo[231516]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:15 compute-0 sudo[231668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlbtnookbxxjrcarpbpwlvwckjmaojxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066494.7164743-860-155231093478558/AnsiballZ_file.py'
Nov 25 10:28:15 compute-0 sudo[231668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:15 compute-0 python3.9[231670]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:15 compute-0 sudo[231668]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:15 compute-0 sudo[231820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bywnxynouzbkhikihlosfjqvqhawwxsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066495.4525514-868-154441344565205/AnsiballZ_stat.py'
Nov 25 10:28:15 compute-0 sudo[231820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:16 compute-0 python3.9[231822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:16 compute-0 sudo[231820]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:16 compute-0 sudo[231898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkyjlexoyfmszfkniokarrtaysrdohfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066495.4525514-868-154441344565205/AnsiballZ_file.py'
Nov 25 10:28:16 compute-0 sudo[231898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:16 compute-0 python3.9[231900]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:16 compute-0 sudo[231898]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:17 compute-0 podman[232024]: 2025-11-25 10:28:17.126468765 +0000 UTC m=+0.055676120 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:28:17 compute-0 sudo[232066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzqmeapnyokdsvjdsesinxwqurgjovai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066496.7695029-880-122995638933749/AnsiballZ_stat.py'
Nov 25 10:28:17 compute-0 sudo[232066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:17 compute-0 python3.9[232075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:17 compute-0 sudo[232066]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:17 compute-0 sudo[232151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfnmeefuslcvsardckbrcacwpgmjeuvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066496.7695029-880-122995638933749/AnsiballZ_file.py'
Nov 25 10:28:17 compute-0 sudo[232151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:17 compute-0 python3.9[232153]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zwb1koxy recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:17 compute-0 sudo[232151]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:18 compute-0 sudo[232303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjphqzzweramotwamqbjlzjnpzrlgvel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066498.1169608-892-220265392221887/AnsiballZ_stat.py'
Nov 25 10:28:18 compute-0 sudo[232303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:18 compute-0 python3.9[232305]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:18 compute-0 sudo[232303]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:18 compute-0 podman[232331]: 2025-11-25 10:28:18.969197751 +0000 UTC m=+0.077017715 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 10:28:19 compute-0 sudo[232402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlcuovhbkokbhpmfrmztdvdzqrcyqqto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066498.1169608-892-220265392221887/AnsiballZ_file.py'
Nov 25 10:28:19 compute-0 sudo[232402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:19 compute-0 python3.9[232404]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:19 compute-0 sudo[232402]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:19 compute-0 podman[232505]: 2025-11-25 10:28:19.998275806 +0000 UTC m=+0.110543146 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:28:20 compute-0 sudo[232581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fonfhoaseuteoqznydkpusibncxmqvsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066499.6971743-905-117256830833856/AnsiballZ_command.py'
Nov 25 10:28:20 compute-0 sudo[232581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:20 compute-0 python3.9[232583]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:28:20 compute-0 sudo[232581]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:20 compute-0 sudo[232734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhxroqmsqusmxtyoyygiqzmpqynvovxr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066500.4738638-913-106411358581399/AnsiballZ_edpm_nftables_from_files.py'
Nov 25 10:28:20 compute-0 sudo[232734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:21 compute-0 python3[232736]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 25 10:28:21 compute-0 sudo[232734]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:21 compute-0 sudo[232886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adikobcardrjfofjznvwvwxnsjdgjvrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066501.4280856-921-37023848451680/AnsiballZ_stat.py'
Nov 25 10:28:21 compute-0 sudo[232886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:22 compute-0 python3.9[232888]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:22 compute-0 sudo[232886]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:22 compute-0 sudo[232964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uutrkqwdvsomumvmxpwjqgpddedivucx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066501.4280856-921-37023848451680/AnsiballZ_file.py'
Nov 25 10:28:22 compute-0 sudo[232964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:22 compute-0 python3.9[232966]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:22 compute-0 sudo[232964]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:23 compute-0 sudo[233116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tybempvtsvoygglhbpkobilngyrpyoqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066502.782338-933-138621428609802/AnsiballZ_stat.py'
Nov 25 10:28:23 compute-0 sudo[233116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:23 compute-0 python3.9[233118]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:23 compute-0 sudo[233116]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:23 compute-0 sudo[233194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xukizdsoqdprcxyryelpknpsmuvrmptp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066502.782338-933-138621428609802/AnsiballZ_file.py'
Nov 25 10:28:23 compute-0 sudo[233194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:24 compute-0 python3.9[233196]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:24 compute-0 sudo[233194]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:24 compute-0 sudo[233359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqjmxkgdsakmtwpyqgymoypkiepsqwza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066504.2809546-945-281473539883122/AnsiballZ_stat.py'
Nov 25 10:28:24 compute-0 sudo[233359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:24 compute-0 podman[233320]: 2025-11-25 10:28:24.699098865 +0000 UTC m=+0.080619470 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:28:24 compute-0 python3.9[233368]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:24 compute-0 sudo[233359]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:25 compute-0 sudo[233444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izmsovdkxisdpaiqmsctngkwzddhsrah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066504.2809546-945-281473539883122/AnsiballZ_file.py'
Nov 25 10:28:25 compute-0 sudo[233444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:25 compute-0 python3.9[233446]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:25 compute-0 sudo[233444]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:26 compute-0 sudo[233596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tczcyirecijcchxfkqfkiupioecddsci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066505.6506639-957-92208619075754/AnsiballZ_stat.py'
Nov 25 10:28:26 compute-0 sudo[233596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:26 compute-0 python3.9[233598]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:26 compute-0 sudo[233596]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:26 compute-0 sudo[233674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtlfpgmvnlpimihobiitkqfglnpjlgvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066505.6506639-957-92208619075754/AnsiballZ_file.py'
Nov 25 10:28:26 compute-0 sudo[233674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:26 compute-0 python3.9[233676]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:26 compute-0 sudo[233674]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:27 compute-0 sudo[233826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiafpfbpzotcowsptvcaknsvygketbdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066507.0403695-969-105890148754306/AnsiballZ_stat.py'
Nov 25 10:28:27 compute-0 sudo[233826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:27 compute-0 python3.9[233828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:27 compute-0 sudo[233826]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:28 compute-0 sudo[233951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nikhocsactgynimftsorbzqmsbnsijqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066507.0403695-969-105890148754306/AnsiballZ_copy.py'
Nov 25 10:28:28 compute-0 sudo[233951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:28 compute-0 python3.9[233953]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764066507.0403695-969-105890148754306/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:28 compute-0 sudo[233951]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:28 compute-0 sudo[234103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnpywjrxmzukpsdvkoyymxgetwqbkmbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066508.6294181-984-14661971180535/AnsiballZ_file.py'
Nov 25 10:28:28 compute-0 sudo[234103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:29 compute-0 python3.9[234105]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:29 compute-0 sudo[234103]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:29 compute-0 podman[203557]: time="2025-11-25T10:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:28:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 25 10:28:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4276 "" "Go-http-client/1.1"
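The two podman[203557] requests above are ordinary libpod REST calls arriving over the Podman service socket; per the podman_exporter config recorded below (CONTAINER_HOST=unix:///run/podman/podman.sock), this is the exporter polling container state. A minimal sketch of issuing the same query by hand, assuming root access to that socket (the "d" hostname is only a placeholder curl needs for unix-socket URLs):

  # Same libpod endpoint the exporter hits (illustrative, not from the log):
  curl -s --unix-socket /run/podman/podman.sock \
    'http://d/v4.9.3/libpod/containers/json?all=true&external=false'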
Nov 25 10:28:29 compute-0 podman[234229]: 2025-11-25 10:28:29.820625666 +0000 UTC m=+0.070870255 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:28:29 compute-0 sudo[234273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ychcgaedsnmfyzmuglfyunqgetohzcuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066509.4064791-992-11391593730971/AnsiballZ_command.py'
Nov 25 10:28:29 compute-0 sudo[234273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:30 compute-0 python3.9[234282]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:28:30 compute-0 sudo[234273]: pam_unix(sudo:session): session closed for user root
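The command logged at 10:28:30 is the firewall role's syntax check: it concatenates the nftables fragments in load order and feeds them to nft in check-only mode, so the ruleset is parsed and validated without touching the kernel. The same pipeline, reflowed for readability (taken directly from the log; -c means check only):

  set -o pipefail
  cat /etc/nftables/edpm-chains.nft \
      /etc/nftables/edpm-flushes.nft \
      /etc/nftables/edpm-rules.nft \
      /etc/nftables/edpm-update-jumps.nft \
      /etc/nftables/edpm-jumps.nft | nft -c -f -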
Nov 25 10:28:30 compute-0 sudo[234435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfhxihfpwwiekqfmgkixsfghbrdazuoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066510.310484-1000-280802930188231/AnsiballZ_blockinfile.py'
Nov 25 10:28:30 compute-0 sudo[234435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:31 compute-0 python3.9[234437]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:31 compute-0 sudo[234435]: pam_unix(sudo:session): session closed for user root
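The blockinfile task above maintains a marker-delimited block in /etc/sysconfig/nftables.conf and runs nft -c -f %s against the candidate file before installing it. From the logged block=, marker=# {mark} ANSIBLE MANAGED BLOCK, marker_begin=BEGIN and marker_end=END parameters, the managed region of the file should read:

  # BEGIN ANSIBLE MANAGED BLOCK
  include "/etc/nftables/iptables.nft"
  include "/etc/nftables/edpm-chains.nft"
  include "/etc/nftables/edpm-rules.nft"
  include "/etc/nftables/edpm-jumps.nft"
  # END ANSIBLE MANAGED BLOCK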
Nov 25 10:28:31 compute-0 openstack_network_exporter[205722]: ERROR   10:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:28:31 compute-0 openstack_network_exporter[205722]: ERROR   10:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:28:31 compute-0 openstack_network_exporter[205722]: ERROR   10:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:28:31 compute-0 openstack_network_exporter[205722]: ERROR   10:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:28:31 compute-0 openstack_network_exporter[205722]: ERROR   10:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:28:31 compute-0 sudo[234587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkteedmodsfbxprdjblipfgvmahohway ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066511.3008015-1009-20680134945587/AnsiballZ_command.py'
Nov 25 10:28:31 compute-0 sudo[234587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:31 compute-0 python3.9[234589]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:28:31 compute-0 sudo[234587]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:32 compute-0 sudo[234740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haptfvjaezdwtbxawbpsuncvxlmktsbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066512.0967402-1017-128098572056430/AnsiballZ_stat.py'
Nov 25 10:28:32 compute-0 sudo[234740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:32 compute-0 python3.9[234742]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 25 10:28:32 compute-0 sudo[234740]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:33 compute-0 sudo[234894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgziiovlftdmjdjlqbaledgfqyutlwuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066512.8562882-1025-264863514668089/AnsiballZ_command.py'
Nov 25 10:28:33 compute-0 sudo[234894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:33 compute-0 python3.9[234896]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:28:33 compute-0 sudo[234894]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:33 compute-0 sudo[235049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtmjinuxvnffehzakbwpgpmruqgfabur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066513.616183-1033-223939806928160/AnsiballZ_file.py'
Nov 25 10:28:33 compute-0 sudo[235049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:34 compute-0 python3.9[235051]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:34 compute-0 sudo[235049]: pam_unix(sudo:session): session closed for user root
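The edpm-rules.nft.changed file touched at 10:28:29, stat-ed at 10:28:32, and removed here is a change flag: the copy task creates it when the ruleset file was rewritten, the reload task (the nft -f - pipeline at 10:28:33) runs only while the flag exists, and the flag is deleted once the reload succeeds. A condensed sketch of the same flag pattern (illustrative shell only; the real steps are the Ansible modules logged above):

  flag=/etc/nftables/edpm-rules.nft.changed
  if [ -e "$flag" ]; then
      cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
          /etc/nftables/edpm-update-jumps.nft | nft -f - && rm -f "$flag"
  fi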
Nov 25 10:28:34 compute-0 sshd-session[214827]: Connection closed by 192.168.122.30 port 46596
Nov 25 10:28:34 compute-0 sshd-session[214824]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:28:34 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Nov 25 10:28:34 compute-0 systemd[1]: session-27.scope: Consumed 1min 23.851s CPU time.
Nov 25 10:28:34 compute-0 systemd-logind[822]: Session 27 logged out. Waiting for processes to exit.
Nov 25 10:28:34 compute-0 systemd-logind[822]: Removed session 27.
Nov 25 10:28:34 compute-0 podman[235076]: 2025-11-25 10:28:34.962025017 +0000 UTC m=+0.077744126 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Nov 25 10:28:35 compute-0 podman[235096]: 2025-11-25 10:28:35.942484471 +0000 UTC m=+0.062145190 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:28:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:28:36.022 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:28:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:28:36.023 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:28:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:28:36.023 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:28:36 compute-0 podman[235115]: 2025-11-25 10:28:36.974826932 +0000 UTC m=+0.080894128 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, vcs-type=git, version=9.4, container_name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 25 10:28:40 compute-0 sshd-session[235133]: Accepted publickey for zuul from 192.168.122.30 port 60892 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 10:28:40 compute-0 systemd-logind[822]: New session 28 of user zuul.
Nov 25 10:28:40 compute-0 systemd[1]: Started Session 28 of User zuul.
Nov 25 10:28:40 compute-0 sshd-session[235133]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:28:41 compute-0 python3.9[235286]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:28:43 compute-0 sudo[235440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmgumvenejswrzrapgrllonyymjgmgvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066522.4053879-34-158244058892002/AnsiballZ_systemd.py'
Nov 25 10:28:43 compute-0 sudo[235440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:43 compute-0 python3.9[235442]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Nov 25 10:28:43 compute-0 sudo[235440]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:43 compute-0 sudo[235593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpmxbnaubdkfgyffdkytbtjqosnhrvya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066523.58725-42-236277030191336/AnsiballZ_setup.py'
Nov 25 10:28:43 compute-0 sudo[235593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:44 compute-0 python3.9[235595]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 25 10:28:44 compute-0 sudo[235593]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:44 compute-0 podman[235604]: 2025-11-25 10:28:44.737159995 +0000 UTC m=+0.064147798 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:28:45 compute-0 sudo[235695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prhiwtkpiuskskozamxprdfmtbiuwznt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066523.58725-42-236277030191336/AnsiballZ_dnf.py'
Nov 25 10:28:45 compute-0 sudo[235695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:45 compute-0 python3.9[235697]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 25 10:28:47 compute-0 podman[235704]: 2025-11-25 10:28:47.957715295 +0000 UTC m=+0.059563734 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 10:28:48 compute-0 sudo[235695]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:48 compute-0 sudo[235874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpebvfbgiduyovyjpjfnsgcchrxnooot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066528.2428155-54-104213950735192/AnsiballZ_stat.py'
Nov 25 10:28:48 compute-0 sudo[235874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:48 compute-0 python3.9[235876]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:49 compute-0 sudo[235874]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:49 compute-0 sudo[236013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfqeiarmhcodttxaaxenqfjomaqoojla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066528.2428155-54-104213950735192/AnsiballZ_copy.py'
Nov 25 10:28:49 compute-0 sudo[236013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:49 compute-0 podman[235972]: 2025-11-25 10:28:49.675193386 +0000 UTC m=+0.109012371 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 10:28:49 compute-0 python3.9[236021]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764066528.2428155-54-104213950735192/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:49 compute-0 sudo[236013]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:50 compute-0 sudo[236186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqcdbvaddpbspgdstcawlpffeisnzroy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066530.070848-69-43358872044097/AnsiballZ_file.py'
Nov 25 10:28:50 compute-0 sudo[236186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:50 compute-0 podman[236145]: 2025-11-25 10:28:50.67376293 +0000 UTC m=+0.117454889 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 10:28:50 compute-0 python3.9[236191]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:50 compute-0 sudo[236186]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:51 compute-0 sudo[236347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bukvjbgyspblwucnnzfwpooukbtuwdwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066531.0914075-77-93129424718138/AnsiballZ_stat.py'
Nov 25 10:28:51 compute-0 sudo[236347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:51 compute-0 python3.9[236349]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 25 10:28:51 compute-0 sudo[236347]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:52 compute-0 sudo[236470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjotdbodtgfdknyyrycpduxbqryayaai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066531.0914075-77-93129424718138/AnsiballZ_copy.py'
Nov 25 10:28:52 compute-0 sudo[236470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:52 compute-0 python3.9[236472]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764066531.0914075-77-93129424718138/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 25 10:28:52 compute-0 sudo[236470]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:52 compute-0 sudo[236622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mttpwhfzmcrwkocpxazphbiqeznnmudb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764066532.5003803-92-13463573653780/AnsiballZ_systemd.py'
Nov 25 10:28:52 compute-0 sudo[236622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:28:53 compute-0 python3.9[236624]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 25 10:28:53 compute-0 systemd[1]: Stopping System Logging Service...
Nov 25 10:28:53 compute-0 rsyslogd[1010]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1010" x-info="https://www.rsyslog.com"] exiting on signal 15.
Nov 25 10:28:53 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Nov 25 10:28:53 compute-0 systemd[1]: Stopped System Logging Service.
Nov 25 10:28:53 compute-0 systemd[1]: rsyslog.service: Consumed 4.165s CPU time, 9.4M memory peak, read 0B from disk, written 6.6M to disk.
Nov 25 10:28:53 compute-0 systemd[1]: Starting System Logging Service...
Nov 25 10:28:53 compute-0 rsyslogd[236628]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236628" x-info="https://www.rsyslog.com"] start
Nov 25 10:28:53 compute-0 systemd[1]: Started System Logging Service.
Nov 25 10:28:53 compute-0 rsyslogd[236628]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:28:53 compute-0 rsyslogd[236628]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Nov 25 10:28:53 compute-0 rsyslogd[236628]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Nov 25 10:28:53 compute-0 rsyslogd[236628]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Nov 25 10:28:53 compute-0 sudo[236622]: pam_unix(sudo:session): session closed for user root
Nov 25 10:28:53 compute-0 rsyslogd[236628]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
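The e/2330 and e/2331 warnings above mean rsyslog's client certificate and key are unset, so the TLS session to 172.17.0.80 authenticates the server only; the "no shared curve" line is likewise an informational OpenSSL negotiation note, not a failure. In rsyslog terms these map to the global netstream-driver parameters; a sketch of the relevant directives, where the CA path is the file installed at 10:28:49 but the commented cert/key paths are assumptions (the actual 10-telemetry.conf content is not logged):

  # CA bundle deployed by the play:
  global(DefaultNetstreamDriverCAFile="/etc/pki/rsyslog/ca-openshift.crt")
  # Unset on this host, hence warnings e/2330 and e/2331 (paths assumed):
  # global(DefaultNetstreamDriverCertFile="/etc/pki/rsyslog/client.crt")
  # global(DefaultNetstreamDriverKeyFile="/etc/pki/rsyslog/client.key")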
Nov 25 10:28:54 compute-0 sshd-session[235136]: Connection closed by 192.168.122.30 port 60892
Nov 25 10:28:54 compute-0 sshd-session[235133]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:28:54 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Nov 25 10:28:54 compute-0 systemd[1]: session-28.scope: Consumed 10.037s CPU time.
Nov 25 10:28:54 compute-0 systemd-logind[822]: Session 28 logged out. Waiting for processes to exit.
Nov 25 10:28:54 compute-0 systemd-logind[822]: Removed session 28.
Nov 25 10:28:54 compute-0 podman[236657]: 2025-11-25 10:28:54.967142084 +0000 UTC m=+0.078898520 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 10:28:56 compute-0 nova_compute[189381]: 2025-11-25 10:28:56.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:56 compute-0 nova_compute[189381]: 2025-11-25 10:28:56.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:28:56 compute-0 nova_compute[189381]: 2025-11-25 10:28:56.050 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:28:56 compute-0 nova_compute[189381]: 2025-11-25 10:28:56.050 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:56 compute-0 nova_compute[189381]: 2025-11-25 10:28:56.050 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 10:28:56 compute-0 nova_compute[189381]: 2025-11-25 10:28:56.071 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:57 compute-0 nova_compute[189381]: 2025-11-25 10:28:57.079 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:57 compute-0 nova_compute[189381]: 2025-11-25 10:28:57.082 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:57 compute-0 sshd-session[236677]: Connection closed by authenticating user root 171.244.51.45 port 49128 [preauth]
Nov 25 10:28:58 compute-0 nova_compute[189381]: 2025-11-25 10:28:58.019 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.050 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.357 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.358 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5724MB free_disk=72.26263046264648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.359 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.359 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.543 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.543 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.629 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.715 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.715 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.737 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:28:59 compute-0 podman[203557]: time="2025-11-25T10:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:28:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.770 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:28:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4275 "" "Go-http-client/1.1"
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.805 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.819 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.821 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:28:59 compute-0 nova_compute[189381]: 2025-11-25 10:28:59.821 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.462s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
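The inventory pushed to placement above fixes this node's schedulable capacity. Assuming placement's standard capacity formula, capacity = (total - reserved) * allocation_ratio, the logged figures work out to:

  VCPU:      (8    - 0)   * 4.0 = 32 schedulable vCPUs
  MEMORY_MB: (7679 - 512) * 1.0 = 7167 MB
  DISK_GB:   (79   - 0)   * 0.9 = 71.1 GB

which is consistent with the free_vcpus=8 and used_ram=512MB view the resource tracker reported at 10:28:59.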
Nov 25 10:28:59 compute-0 podman[236679]: 2025-11-25 10:28:59.951981674 +0000 UTC m=+0.065770205 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:29:00 compute-0 nova_compute[189381]: 2025-11-25 10:29:00.821 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:29:00 compute-0 nova_compute[189381]: 2025-11-25 10:29:00.821 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:29:00 compute-0 nova_compute[189381]: 2025-11-25 10:29:00.821 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:29:01 compute-0 nova_compute[189381]: 2025-11-25 10:29:01.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:29:01 compute-0 nova_compute[189381]: 2025-11-25 10:29:01.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:29:01 compute-0 nova_compute[189381]: 2025-11-25 10:29:01.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:29:01 compute-0 nova_compute[189381]: 2025-11-25 10:29:01.038 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
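The heal task rebuilt its candidate list and found no instances, so there is nothing to refresh this pass, which matches the idle state of the node throughout this cycle. Structurally the task rebuilds the list on each run and heals at most one instance per pass; a structure-only sketch (helpers are hypothetical stand-ins, not nova's code):

    # Shape of the heal loop: rebuild the list, then refresh network
    # info for at most one instance per periodic pass.
    def refresh_network_info(instance):
        print('refreshed network info for', instance)  # stand-in

    def heal_instance_info_cache(instances_on_host):
        to_heal = list(instances_on_host)   # "Rebuilding the list ..."
        if not to_heal:
            return  # "Didn't find any instances ..." (this log's case)
        refresh_network_info(to_heal.pop(0))

    heal_instance_info_cache([])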
Nov 25 10:29:01 compute-0 openstack_network_exporter[205722]: ERROR   10:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:29:01 compute-0 openstack_network_exporter[205722]: ERROR   10:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:29:01 compute-0 openstack_network_exporter[205722]: ERROR   10:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:29:01 compute-0 openstack_network_exporter[205722]: ERROR   10:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
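These exporter errors are expected on a compute node: appctl-style calls need the Unix control sockets that ovs-vswitchd, ovsdb-server and ovn-northd create in their run directories, and ovn-northd is a control-plane daemon that does not run here at all, while the dpif-netdev calls only apply to a userspace (DPDK) datapath. A quick existence check for the conventional socket locations (paths may differ per deployment; compare the exporter's volume mounts later in this log):

    # Check which OVS/OVN control sockets exist on this host (sketch).
    # Patterns follow the conventional run-directory layouts.
    import glob

    for pattern in ('/var/run/openvswitch/ovs-vswitchd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'none')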
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.325 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.327 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
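With only [1] worker thread for this source, pollsters are processed serially and the cycle time grows with the pollster count, which is exactly what the first message warns about. The behavior is plain concurrent.futures semantics; a self-contained illustration:

    # When max_workers < number of tasks, submissions queue and run
    # serially, so total wall time scales with the task count.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def poll(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    with ThreadPoolExecutor(max_workers=1) as executor:
        t0 = time.monotonic()
        futures = [executor.submit(poll, f'pollster-{i}') for i in range(4)]
        [f.result() for f in futures]
        print(f'{time.monotonic() - t0:.1f}s')  # ~0.4s, not ~0.1s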
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
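Note how the registration lines accumulate state as the cycle progresses: the pollster history gains one key per finished meter, and the discovery cache now carries 'local_instances': [], so subsequent pollsters reuse that (empty) discovery result instead of re-running instance discovery. A sketch of that per-cycle memoization (function names are illustrative, not ceilometer's API):

    # Per-cycle discovery cache: the first pollster needing a discovery
    # method pays for it once; the rest reuse the cached result.
    def discover_local_instances():
        print('discovery ran')      # happens once per cycle
        return []                   # empty on this idle compute

    def run_cycle(pollsters):
        discovery_cache = {}
        for name in pollsters:
            if 'local_instances' not in discovery_cache:
                discovery_cache['local_instances'] = discover_local_instances()
            if not discovery_cache['local_instances']:
                print(f'Skip pollster {name}, no resources found this cycle')

    run_cycle(['cpu', 'memory.usage'])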
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081296d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
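Every pollster in this cycle ends in a skip, consistent with nova reporting no instances on the node at 10:29:01, so no samples are emitted at all. To confirm that from a journal capture, counting the skip lines per meter is enough (sketch; feed it journal text on stdin, for example journalctl output piped through it):

    # Count "Skip pollster <meter>" lines from a journal excerpt on stdin.
    import re
    import sys
    from collections import Counter

    skips = Counter(
        m.group(1)
        for line in sys.stdin
        for m in [re.search(r'Skip pollster (\S+),', line)]
        if m)
    for meter, n in skips.most_common():
        print(f'{n:4d}  {meter}')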
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.345 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.346 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:29:03.347 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:29:05 compute-0 podman[236704]: 2025-11-25 10:29:05.961421039 +0000 UTC m=+0.077509739 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:29:06 compute-0 podman[236724]: 2025-11-25 10:29:06.059303694 +0000 UTC m=+0.074206033 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 10:29:07 compute-0 podman[236744]: 2025-11-25 10:29:07.962850611 +0000 UTC m=+0.079585230 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Nov 25 10:29:14 compute-0 podman[236763]: 2025-11-25 10:29:14.973061177 +0000 UTC m=+0.073427207 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:29:18 compute-0 podman[236783]: 2025-11-25 10:29:18.943978284 +0000 UTC m=+0.050907641 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:29:19 compute-0 podman[236808]: 2025-11-25 10:29:19.969839479 +0000 UTC m=+0.088340544 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:29:20 compute-0 podman[236829]: 2025-11-25 10:29:20.979022284 +0000 UTC m=+0.092500543 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 10:29:24 compute-0 sshd-session[236854]: Accepted publickey for zuul from 38.102.83.176 port 53748 ssh2: RSA SHA256:AY70hpNEXJR6fAK1y9JiAEJ1ZGByytYoO+9neWZvmFk
Nov 25 10:29:24 compute-0 systemd-logind[822]: New session 29 of user zuul.
Nov 25 10:29:24 compute-0 systemd[1]: Started Session 29 of User zuul.
Nov 25 10:29:24 compute-0 sshd-session[236854]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:29:25 compute-0 podman[237006]: 2025-11-25 10:29:25.432737465 +0000 UTC m=+0.096552200 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:29:25 compute-0 python3[237046]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:29:27 compute-0 sudo[237270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eovhlbklxhzokfmhzkzcngiftcuznfpi ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066566.9090204-37051-146790731547543/AnsiballZ_command.py'
Nov 25 10:29:27 compute-0 sudo[237270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:29:27 compute-0 python3[237272]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:29:27 compute-0 sudo[237270]: pam_unix(sudo:session): session closed for user root
Nov 25 10:29:28 compute-0 sudo[237423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psdfgbnxefxdmrernplhwlpibhgqlshi ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066567.9605668-37062-216728578221790/AnsiballZ_command.py'
Nov 25 10:29:28 compute-0 sudo[237423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:29:28 compute-0 python3[237425]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:29:29 compute-0 podman[203557]: time="2025-11-25T10:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:29:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:29:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4287 "" "Go-http-client/1.1"
Nov 25 10:29:29 compute-0 sudo[237423]: pam_unix(sudo:session): session closed for user root
Nov 25 10:29:30 compute-0 podman[237551]: 2025-11-25 10:29:30.962129756 +0000 UTC m=+0.074632881 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:29:31 compute-0 python3[237589]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 25 10:29:31 compute-0 openstack_network_exporter[205722]: ERROR   10:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:29:31 compute-0 openstack_network_exporter[205722]: ERROR   10:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:29:31 compute-0 openstack_network_exporter[205722]: ERROR   10:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:29:31 compute-0 openstack_network_exporter[205722]: ERROR   10:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:29:31 compute-0 openstack_network_exporter[205722]: ERROR   10:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:29:31 compute-0 sudo[237751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmndptpaxysrygkxwknipmjewkqxtdiv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066571.5157335-37106-273352067519114/AnsiballZ_setup.py'
Nov 25 10:29:31 compute-0 sudo[237751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:29:32 compute-0 python3[237753]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 25 10:29:33 compute-0 sudo[237751]: pam_unix(sudo:session): session closed for user root
Nov 25 10:29:34 compute-0 sudo[237976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llyihztgjqzgzjjnixoqobxgrvwvqxdo ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066573.700205-37135-222689541395890/AnsiballZ_command.py'
Nov 25 10:29:34 compute-0 sudo[237976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:29:34 compute-0 python3[237978]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:29:34 compute-0 sudo[237976]: pam_unix(sudo:session): session closed for user root
Nov 25 10:29:34 compute-0 sudo[238141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-croyienpmwnjmxrnkwpeaiksxelultek ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764066574.6085863-37152-248328176115099/AnsiballZ_command.py'
Nov 25 10:29:34 compute-0 sudo[238141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:29:35 compute-0 python3[238143]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:29:35 compute-0 sudo[238141]: pam_unix(sudo:session): session closed for user root
Nov 25 10:29:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:29:36.026 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:29:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:29:36.028 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:29:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:29:36.028 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:29:36 compute-0 podman[238181]: 2025-11-25 10:29:36.980276023 +0000 UTC m=+0.086102840 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:29:37 compute-0 podman[238182]: 2025-11-25 10:29:37.002908002 +0000 UTC m=+0.108473761 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 10:29:38 compute-0 podman[238216]: 2025-11-25 10:29:38.95488072 +0000 UTC m=+0.071953544 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, distribution-scope=public, name=ubi9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0)
Nov 25 10:29:45 compute-0 podman[238235]: 2025-11-25 10:29:45.954851729 +0000 UTC m=+0.066922880 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 10:29:49 compute-0 podman[238255]: 2025-11-25 10:29:49.950675521 +0000 UTC m=+0.064269524 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:29:50 compute-0 podman[238280]: 2025-11-25 10:29:50.94496584 +0000 UTC m=+0.064276624 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 25 10:29:51 compute-0 podman[238301]: 2025-11-25 10:29:51.983976162 +0000 UTC m=+0.102382226 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:29:55 compute-0 podman[238327]: 2025-11-25 10:29:55.964094564 +0000 UTC m=+0.072210732 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:29:57 compute-0 nova_compute[189381]: 2025-11-25 10:29:57.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:29:57 compute-0 nova_compute[189381]: 2025-11-25 10:29:57.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.017 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.046 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.351 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.353 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5716MB free_disk=72.26259231567383GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.353 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.354 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.414 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.415 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.437 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.449 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.451 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:29:59 compute-0 nova_compute[189381]: 2025-11-25 10:29:59.451 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:29:59 compute-0 podman[203557]: time="2025-11-25T10:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:29:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:29:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4282 "" "Go-http-client/1.1"
Nov 25 10:30:00 compute-0 nova_compute[189381]: 2025-11-25 10:30:00.453 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:30:00 compute-0 nova_compute[189381]: 2025-11-25 10:30:00.455 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:30:00 compute-0 nova_compute[189381]: 2025-11-25 10:30:00.455 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:30:01 compute-0 nova_compute[189381]: 2025-11-25 10:30:01.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:30:01 compute-0 nova_compute[189381]: 2025-11-25 10:30:01.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:30:01 compute-0 nova_compute[189381]: 2025-11-25 10:30:01.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:30:01 compute-0 nova_compute[189381]: 2025-11-25 10:30:01.036 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:30:01 compute-0 openstack_network_exporter[205722]: ERROR   10:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:30:01 compute-0 openstack_network_exporter[205722]: ERROR   10:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:30:01 compute-0 openstack_network_exporter[205722]: ERROR   10:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:30:01 compute-0 openstack_network_exporter[205722]: ERROR   10:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:30:01 compute-0 openstack_network_exporter[205722]: ERROR   10:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:30:01 compute-0 podman[238345]: 2025-11-25 10:30:01.943497821 +0000 UTC m=+0.055749790 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:30:02 compute-0 nova_compute[189381]: 2025-11-25 10:30:02.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:30:03 compute-0 nova_compute[189381]: 2025-11-25 10:30:03.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:30:07 compute-0 podman[238367]: 2025-11-25 10:30:07.951171468 +0000 UTC m=+0.066525659 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:30:07 compute-0 podman[238368]: 2025-11-25 10:30:07.966087485 +0000 UTC m=+0.073723995 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:30:09 compute-0 podman[238403]: 2025-11-25 10:30:09.948443445 +0000 UTC m=+0.066206099 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1214.1726694543, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 25 10:30:16 compute-0 podman[238423]: 2025-11-25 10:30:16.947987914 +0000 UTC m=+0.062748704 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 25 10:30:20 compute-0 podman[238443]: 2025-11-25 10:30:20.944244421 +0000 UTC m=+0.061517989 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
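node_exporter's --collector.systemd.unit-include flag takes a regular expression that is effectively anchored to the whole unit name, so the pattern in the config above only keeps the edpm_*, ovs*/openvswitch, virt* and rsyslog units. A quick sanity check of which units pass, using Python's re module as a stand-in for Go's regexp; the sample unit names are hypothetical:

    # Sketch: which systemd units the unit-include pattern above keeps.
    import re

    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["edpm_nova_compute.service", "openvswitch.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, "->", "kept" if pattern.fullmatch(unit) else "dropped")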
Nov 25 10:30:21 compute-0 podman[238467]: 2025-11-25 10:30:21.053973512 +0000 UTC m=+0.067295063 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:30:23 compute-0 podman[238487]: 2025-11-25 10:30:23.002030398 +0000 UTC m=+0.116237980 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:30:26 compute-0 podman[238513]: 2025-11-25 10:30:26.9604825 +0000 UTC m=+0.072035961 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 25 10:30:29 compute-0 podman[203557]: time="2025-11-25T10:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:30:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:30:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4284 "" "Go-http-client/1.1"
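The two GET lines are podman's REST API service logging requests against the libpod endpoints on the podman socket (the same socket the podman_exporter config later in this log mounts via CONTAINER_HOST). A stdlib-only sketch of issuing the same container-list call over the unix socket; the socket path is the default rootful one:

    # Sketch: call the libpod containers/json endpoint seen in the log
    # over the podman socket, using only the standard library.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            # Replace the TCP connect with a unix-domain socket connect.
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"][0], c["State"])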
Nov 25 10:30:31 compute-0 openstack_network_exporter[205722]: ERROR   10:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:30:31 compute-0 openstack_network_exporter[205722]: ERROR   10:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:30:31 compute-0 openstack_network_exporter[205722]: ERROR   10:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:30:31 compute-0 openstack_network_exporter[205722]: ERROR   10:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:30:31 compute-0 openstack_network_exporter[205722]: ERROR   10:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
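The appctl.go errors above are expected on a compute node: each call needs the target daemon's control socket, ovn-northd only runs on controller nodes, and the ovsdb-server socket is not in the runtime directory this exporter checks; the datapath errors likewise just mean no userspace (netdev) datapath exists here. A tiny sketch of the existence check that fails; the glob patterns are assumptions based on default OVS/OVN runtime paths, not taken from the exporter's own configuration:

    # Sketch: the "no control socket files found" condition is just a
    # failed lookup for <daemon>.<pid>.ctl in the runtime directory.
    # Patterns below are assumed defaults, not the exporter's config.
    import glob

    for daemon, pattern in {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }.items():
        hits = glob.glob(pattern)
        print(daemon, "->", hits or "no control socket files found")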
Nov 25 10:30:32 compute-0 podman[238534]: 2025-11-25 10:30:32.931477166 +0000 UTC m=+0.050854302 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:30:34 compute-0 sshd-session[236857]: Received disconnect from 38.102.83.176 port 53748:11: disconnected by user
Nov 25 10:30:34 compute-0 sshd-session[236857]: Disconnected from user zuul 38.102.83.176 port 53748
Nov 25 10:30:34 compute-0 sshd-session[236854]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:30:34 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Nov 25 10:30:34 compute-0 systemd[1]: session-29.scope: Consumed 8.473s CPU time.
Nov 25 10:30:34 compute-0 systemd-logind[822]: Session 29 logged out. Waiting for processes to exit.
Nov 25 10:30:34 compute-0 systemd-logind[822]: Removed session 29.
Nov 25 10:30:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:30:36.026 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:30:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:30:36.026 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:30:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:30:36.026 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
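The Acquiring/acquired/released triplet is oslo.concurrency's standard DEBUG trace for a named in-process lock. A minimal sketch of the decorator that produces it, assuming oslo.concurrency is installed; the class and body are placeholders, not neutron's real code:

    # Sketch: lockutils.synchronized emits the Acquiring/acquired/
    # released DEBUG lines above once debug logging is enabled.
    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            pass  # placeholder for the real child-process liveness check

    ProcessMonitor()._check_child_processes()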
Nov 25 10:30:38 compute-0 podman[238559]: 2025-11-25 10:30:38.970215111 +0000 UTC m=+0.078904637 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 10:30:38 compute-0 podman[238558]: 2025-11-25 10:30:38.996162146 +0000 UTC m=+0.113306165 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 10:30:40 compute-0 podman[238594]: 2025-11-25 10:30:40.961386644 +0000 UTC m=+0.077011633 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.openshift.expose-services=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:30:47 compute-0 podman[238612]: 2025-11-25 10:30:47.935140351 +0000 UTC m=+0.053285992 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:30:51 compute-0 podman[238634]: 2025-11-25 10:30:51.95515262 +0000 UTC m=+0.065186444 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:30:51 compute-0 podman[238633]: 2025-11-25 10:30:51.960620027 +0000 UTC m=+0.073804531 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1755695350, name=ubi9-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 25 10:30:54 compute-0 podman[238673]: 2025-11-25 10:30:54.028399982 +0000 UTC m=+0.139169919 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 25 10:30:57 compute-0 podman[238698]: 2025-11-25 10:30:57.957575692 +0000 UTC m=+0.070585479 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:30:58 compute-0 nova_compute[189381]: 2025-11-25 10:30:58.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.053 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.054 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.054 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.054 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.377 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.379 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5714MB free_disk=72.2625732421875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
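The pci_devices payload in the resource view above is plain JSON and can be summarized directly from the log line. A sketch that tallies devices by vendor:product pair; only two of the eleven devices are repeated here for brevity:

    # Sketch: tally the pci_devices JSON from the resource view above
    # (subset of the list shown, for brevity).
    import json
    from collections import Counter

    pci_devices = json.loads('''[
      {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
       "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1000", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0",
       "product_id": "7000", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7000", "dev_type": "type-PCI"}
    ]''')
    print(Counter(f'{d["vendor_id"]}:{d["product_id"]}' for d in pci_devices))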
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.379 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.379 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.457 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.457 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.486 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.502 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
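The inventory dict shows why the capacity placement schedules against differs from raw capacity: per resource class it is (total - reserved) * allocation_ratio. Worked out for the values in the log line above:

    # Sketch: effective schedulable capacity implied by the inventory
    # data above: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "->", cap)
    # VCPU -> 32.0, MEMORY_MB -> 7167.0, DISK_GB -> ~71.1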
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.505 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:30:59 compute-0 nova_compute[189381]: 2025-11-25 10:30:59.506 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:30:59 compute-0 podman[203557]: time="2025-11-25T10:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:30:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:30:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4285 "" "Go-http-client/1.1"
Nov 25 10:31:00 compute-0 nova_compute[189381]: 2025-11-25 10:31:00.507 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:31:00 compute-0 nova_compute[189381]: 2025-11-25 10:31:00.507 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:31:01 compute-0 nova_compute[189381]: 2025-11-25 10:31:01.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:31:01 compute-0 nova_compute[189381]: 2025-11-25 10:31:01.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:31:01 compute-0 nova_compute[189381]: 2025-11-25 10:31:01.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
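Every "Running periodic task ComputeManager.*" line comes from oslo.service's periodic task machinery: methods decorated with periodic_task.periodic_task are collected on the manager class and dispatched by run_periodic_tasks, which logs each invocation. A minimal sketch, assuming oslo.service and oslo.config are installed; the manager class and task body are placeholders:

    # Sketch: the decorator/dispatch pattern behind the "Running
    # periodic task" lines. A bare decorator means run on every tick.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _poll_volume_usage(self, context):
            pass  # placeholder task body

    Manager(cfg.CONF).run_periodic_tasks(context=None)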
Nov 25 10:31:01 compute-0 openstack_network_exporter[205722]: ERROR   10:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:31:01 compute-0 openstack_network_exporter[205722]: ERROR   10:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:31:01 compute-0 openstack_network_exporter[205722]: ERROR   10:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:31:01 compute-0 openstack_network_exporter[205722]: ERROR   10:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:31:01 compute-0 openstack_network_exporter[205722]: ERROR   10:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:31:02 compute-0 nova_compute[189381]: 2025-11-25 10:31:02.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:31:02 compute-0 nova_compute[189381]: 2025-11-25 10:31:02.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:31:02 compute-0 nova_compute[189381]: 2025-11-25 10:31:02.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:31:02 compute-0 nova_compute[189381]: 2025-11-25 10:31:02.037 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:31:03 compute-0 nova_compute[189381]: 2025-11-25 10:31:03.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.326 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so this polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.327 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
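Those two lines describe the fan-out model: pollsters are submitted to a thread pool, and with a single worker thread they simply queue and run sequentially, which is why a cycle can overrun. A stdlib-only sketch of that effect; the pollster names and the single worker mirror the log, while the work itself is simulated:

    # Sketch: more tasks than workers on a ThreadPoolExecutor just
    # queue; with max_workers=1 the three "pollsters" run serially.
    import concurrent.futures
    import time

    def poll(name):
        time.sleep(0.1)  # simulated pollster work
        return name

    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        t0 = time.monotonic()
        futures = [pool.submit(poll, n) for n in
                   ["network.outgoing.bytes", "memory.usage", "cpu"]]
        for f in futures:
            f.result()
        print(f"3 pollsters, 1 worker: {time.monotonic() - t0:.1f}s")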
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
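The "Registering pollster" lines above show the ceilometer polling manager loading each metric pollster as a stevedore extension and binding it to one shared ThreadPoolExecutor, along with the per-cycle caches visible in the message. A minimal sketch of that plugin-loading step, assuming the ceilometer package is installed so its entry points exist (the registration helper here is a stand-in, not ceilometer's real function):

```python
# Sketch: enumerate pollster plugins the way ceilometer's polling manager
# does, via stevedore, and hand each one to a shared thread pool.
from concurrent.futures import ThreadPoolExecutor

from stevedore import extension

def register_pollster(ext, executor, cache, history, discovery_cache):
    # Stand-in for the real register_pollster_execution(): each extension
    # wraps a pollster plus the shared caches shown in the log lines.
    print(f"registering {ext.name} on {executor}")

mgr = extension.ExtensionManager(
    namespace="ceilometer.poll.compute",  # ceilometer's compute-agent namespace
    invoke_on_load=False,  # just list entry points; the agent instantiates
                           # them with its own configuration
)
executor = ThreadPoolExecutor(max_workers=4)
cache, history, discovery_cache = {}, {}, {"local_instances": []}
for ext in mgr:
    register_pollster(ext, executor, cache, history, discovery_cache)
```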
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
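Each "Executing discovery ... / Skip pollster ..." pair above is one iteration of the agent's per-pollster cycle: run the `local_instances` discovery method, and if it returns no resources, skip that pollster for this interval. Because this compute node currently hosts no instances, every pollster is skipped. A simplified, hypothetical rendering of that control flow (helper names mirror the log, not ceilometer's exact code):

```python
# Hypothetical distillation of _internal_pollster_run() as it appears in
# the log: discover resources first, poll only if anything came back.
def run_pollster(pollster_name, discover, get_samples, discovery_cache):
    # Discovery results are cached per cycle under the method name, which
    # is why the log shows discovery cache [{'local_instances': []}].
    resources = discovery_cache.setdefault("local_instances", discover())
    if not resources:
        print(f"Skip pollster {pollster_name}, no resources found this cycle")
        return []
    return list(get_samples(resources))

# On a compute node with no instances, every pollster is skipped:
samples = run_pollster(
    "memory.usage",
    discover=lambda: [],         # local_instances found nothing
    get_samples=lambda res: [],  # never reached this cycle
    discovery_cache={},
)
```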
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:31:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:31:03.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
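The burst of "Finished processing pollster [...]" lines is the fan-in side of the same cycle: one task per pollster was submitted to the executor registered earlier, and each completion is logged as its future resolves, which is why the order differs from the registration order. A stdlib-only sketch of that pattern (names illustrative):

```python
# Sketch of the executor fan-out/fan-in that produces the
# "Finished processing pollster [...]" lines: one future per pollster,
# logged as each completes.
from concurrent.futures import ThreadPoolExecutor, as_completed

pollsters = ["network.outgoing.bytes", "memory.usage", "cpu", "power.state"]

def poll(name):
    return name, []  # no resources this cycle, so no samples

with ThreadPoolExecutor(max_workers=4) as executor:
    futures = {executor.submit(poll, name): name for name in pollsters}
    for fut in as_completed(futures):
        name, samples = fut.result()
        print(f"Finished processing pollster [{name}].")
```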
Nov 25 10:31:03 compute-0 podman[238718]: 2025-11-25 10:31:03.942955104 +0000 UTC m=+0.058483191 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:31:09 compute-0 podman[238743]: 2025-11-25 10:31:09.948318168 +0000 UTC m=+0.065666337 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 25 10:31:09 compute-0 podman[238744]: 2025-11-25 10:31:09.95743846 +0000 UTC m=+0.070789244 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:31:11 compute-0 podman[238783]: 2025-11-25 10:31:11.961404631 +0000 UTC m=+0.080952597 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, release=1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, version=9.4, architecture=x86_64)
Nov 25 10:31:18 compute-0 podman[238802]: 2025-11-25 10:31:18.975106237 +0000 UTC m=+0.089891533 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:31:22 compute-0 podman[238821]: 2025-11-25 10:31:22.94989168 +0000 UTC m=+0.069082827 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Nov 25 10:31:22 compute-0 podman[238822]: 2025-11-25 10:31:22.949926771 +0000 UTC m=+0.062811254 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:31:24 compute-0 podman[238867]: 2025-11-25 10:31:24.999413531 +0000 UTC m=+0.118296145 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:31:28 compute-0 podman[238893]: 2025-11-25 10:31:28.943516516 +0000 UTC m=+0.063101024 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
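The podman[...] "container health_status" entries above come from podman's healthcheck timers: each container's configured `healthcheck.test` command is run periodically and the result (here consistently `healthy`, failing streak 0) is recorded as an event. The same state can be read out-of-band; this sketch assumes podman is installed and uses a container name taken from the log:

```python
# Query podman for the health state that these log lines report.
# `podman inspect --format` and `podman healthcheck run` are standard
# podman subcommands; "podman_exporter" is a container name from the log.
import subprocess

name = "podman_exporter"
status = subprocess.run(
    ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(f"{name}: {status}")  # expect "healthy", matching health_status above

# Force an immediate check instead of waiting for the timer:
subprocess.run(["podman", "healthcheck", "run", name], check=False)
```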
Nov 25 10:31:29 compute-0 podman[203557]: time="2025-11-25T10:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:31:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:31:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4280 "" "Go-http-client/1.1"
Nov 25 10:31:31 compute-0 openstack_network_exporter[205722]: ERROR   10:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:31:31 compute-0 openstack_network_exporter[205722]: ERROR   10:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:31:31 compute-0 openstack_network_exporter[205722]: ERROR   10:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:31:31 compute-0 openstack_network_exporter[205722]: ERROR   10:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:31:31 compute-0 openstack_network_exporter[205722]: ERROR   10:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
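The openstack_network_exporter errors above are expected on a compute-only node: the exporter probes ovn-northd and ovsdb-server through their appctl control sockets, but those daemons run on controller nodes, so no `*.ctl` files exist under the /run/ovn and /run/openvswitch paths mounted into the container (and with no userspace datapath, the dpif-netdev calls fail too). A hedged check for the same condition; the socket path patterns below are the conventional ones and may differ on other layouts:

```python
# Look for the appctl control sockets the exporter tries to use. On a
# compute-only node ovn-northd has no socket, which produces the
# "no control socket files found" errors logged above.
import glob

patterns = {
    "ovn-northd": "/run/ovn/ovn-northd.*.ctl",
    "ovsdb-server": "/run/openvswitch/ovsdb-server.*.ctl",
    "ovs-vswitchd": "/run/openvswitch/ovs-vswitchd.*.ctl",
}
for daemon, pattern in patterns.items():
    sockets = glob.glob(pattern)
    print(daemon, "->", sockets or "no control socket files found")
```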
Nov 25 10:31:34 compute-0 podman[238912]: 2025-11-25 10:31:34.934900911 +0000 UTC m=+0.054375708 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:31:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:31:36.028 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:31:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:31:36.028 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:31:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:31:36.029 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
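The three lockutils lines trace a single acquire/release of an in-process oslo.concurrency lock around ProcessMonitor._check_child_processes (waited 0.001s, held 0.000s). The same pattern in miniature, using oslo.concurrency's real decorator with the lock name taken from the log:

```python
# oslo.concurrency emits exactly these "Acquiring lock / Lock acquired /
# Lock released" DEBUG lines when a @synchronized section runs.
# Requires: pip install oslo.concurrency
import logging

from oslo_concurrency import lockutils

logging.basicConfig(level=logging.DEBUG)

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Critical section: only one thread inspects child processes at a time.
    pass

check_child_processes()
```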
Nov 25 10:31:40 compute-0 podman[238936]: 2025-11-25 10:31:40.950386357 +0000 UTC m=+0.065725080 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 10:31:40 compute-0 podman[238935]: 2025-11-25 10:31:40.973267136 +0000 UTC m=+0.092073060 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:31:42 compute-0 podman[238971]: 2025-11-25 10:31:42.946618543 +0000 UTC m=+0.065846263 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, distribution-scope=public, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Nov 25 10:31:46 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:31:46.393 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:31:46 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:31:46.394 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:31:46 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:31:46.395 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:31:49 compute-0 podman[238991]: 2025-11-25 10:31:49.942859461 +0000 UTC m=+0.061909319 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:31:53 compute-0 podman[239010]: 2025-11-25 10:31:53.961828664 +0000 UTC m=+0.067129451 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter)
Nov 25 10:31:53 compute-0 podman[239011]: 2025-11-25 10:31:53.966680865 +0000 UTC m=+0.068986604 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 10:31:56 compute-0 podman[239052]: 2025-11-25 10:31:56.03418469 +0000 UTC m=+0.145905290 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:31:58 compute-0 nova_compute[189381]: 2025-11-25 10:31:58.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:31:59 compute-0 podman[203557]: time="2025-11-25T10:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:31:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:31:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4288 "" "Go-http-client/1.1"
Nov 25 10:31:59 compute-0 podman[239079]: 2025-11-25 10:31:59.944995416 +0000 UTC m=+0.061934519 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 10:32:00 compute-0 nova_compute[189381]: 2025-11-25 10:32:00.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.056 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.056 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.057 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.057 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.387 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.389 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5695MB free_disk=72.2625732421875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.389 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.389 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:32:01 compute-0 openstack_network_exporter[205722]: ERROR   10:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:32:01 compute-0 openstack_network_exporter[205722]: ERROR   10:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:32:01 compute-0 openstack_network_exporter[205722]: ERROR   10:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:32:01 compute-0 openstack_network_exporter[205722]: ERROR   10:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:32:01 compute-0 openstack_network_exporter[205722]: ERROR   10:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.465 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.465 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.497 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.509 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.510 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:32:01 compute-0 nova_compute[189381]: 2025-11-25 10:32:01.510 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:32:02 compute-0 nova_compute[189381]: 2025-11-25 10:32:02.504 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:02 compute-0 nova_compute[189381]: 2025-11-25 10:32:02.505 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:02 compute-0 nova_compute[189381]: 2025-11-25 10:32:02.505 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:32:02 compute-0 nova_compute[189381]: 2025-11-25 10:32:02.505 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:32:02 compute-0 nova_compute[189381]: 2025-11-25 10:32:02.520 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:32:02 compute-0 nova_compute[189381]: 2025-11-25 10:32:02.521 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:02 compute-0 nova_compute[189381]: 2025-11-25 10:32:02.521 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:32:05 compute-0 nova_compute[189381]: 2025-11-25 10:32:05.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:05 compute-0 podman[239099]: 2025-11-25 10:32:05.942083335 +0000 UTC m=+0.057563271 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:32:07 compute-0 nova_compute[189381]: 2025-11-25 10:32:07.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:11 compute-0 podman[239123]: 2025-11-25 10:32:11.962167776 +0000 UTC m=+0.073371243 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 10:32:11 compute-0 podman[239122]: 2025-11-25 10:32:11.990610027 +0000 UTC m=+0.101849425 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:32:13 compute-0 podman[239160]: 2025-11-25 10:32:13.96284564 +0000 UTC m=+0.080357547 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, container_name=kepler, release=1214.1726694543, architecture=x86_64, name=ubi9, version=9.4, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:32:20 compute-0 podman[239181]: 2025-11-25 10:32:20.959017415 +0000 UTC m=+0.077322598 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:32:24 compute-0 podman[239198]: 2025-11-25 10:32:24.943977728 +0000 UTC m=+0.061508475 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, name=ubi9-minimal)
Nov 25 10:32:24 compute-0 podman[239199]: 2025-11-25 10:32:24.964979944 +0000 UTC m=+0.080953307 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:32:26 compute-0 podman[239239]: 2025-11-25 10:32:26.982181756 +0000 UTC m=+0.097223996 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 10:32:29 compute-0 podman[203557]: time="2025-11-25T10:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:32:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:32:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4286 "" "Go-http-client/1.1"
Nov 25 10:32:30 compute-0 podman[239264]: 2025-11-25 10:32:30.953744466 +0000 UTC m=+0.071766972 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 25 10:32:31 compute-0 openstack_network_exporter[205722]: ERROR   10:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:32:31 compute-0 openstack_network_exporter[205722]: ERROR   10:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:32:31 compute-0 openstack_network_exporter[205722]: ERROR   10:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:32:31 compute-0 openstack_network_exporter[205722]: ERROR   10:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:32:31 compute-0 openstack_network_exporter[205722]: ERROR   10:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:32:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:32:36.029 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:32:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:32:36.030 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:32:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:32:36.030 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:32:36 compute-0 podman[239284]: 2025-11-25 10:32:36.944113196 +0000 UTC m=+0.060171506 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:32:42 compute-0 podman[239308]: 2025-11-25 10:32:42.947601853 +0000 UTC m=+0.065542051 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 10:32:42 compute-0 podman[239309]: 2025-11-25 10:32:42.964047368 +0000 UTC m=+0.071312838 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 10:32:44 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:32:44.200 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:32:44 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:32:44.201 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:32:44 compute-0 podman[239344]: 2025-11-25 10:32:44.736626664 +0000 UTC m=+0.068104215 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, name=ubi9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Nov 25 10:32:46 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:32:46.203 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:32:51 compute-0 podman[239365]: 2025-11-25 10:32:51.945901614 +0000 UTC m=+0.062021940 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:32:55 compute-0 podman[239383]: 2025-11-25 10:32:55.949734876 +0000 UTC m=+0.067607012 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41)
Nov 25 10:32:55 compute-0 podman[239384]: 2025-11-25 10:32:55.952810284 +0000 UTC m=+0.061490595 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
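Annotation: node_exporter above runs with most collectors disabled; the systemd collector is further filtered by --collector.systemd.unit-include so only units matching the given regex are exported. A quick check of that pattern in Python (approximating node_exporter's anchored matching with fullmatch; the unit names are illustrative):

    import re

    unit_include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    for unit in ('virtqemud.service', 'openvswitch.service',
                 'edpm_nova.service', 'sshd.service'):
        print(unit, bool(unit_include.fullmatch(unit)))
    # Only sshd.service falls outside the include list.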
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.469 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.469 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
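Annotation: the Acquiring/acquired pairs above (and each later "released ... held" line) are oslo.concurrency's lockutils instrumentation; nova serializes the whole build under a per-instance-UUID lock. A minimal sketch of the same pattern with the real oslo.concurrency API; the guarded function body here is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('31174924-a3e8-4662-baad-ac9aa49c01ab')
    def _locked_do_build_and_run_instance():
        # Runs with the named lock held; lockutils emits the DEBUG
        # "acquired ... waited" / "released ... held" timings seen above.
        pass

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass  # critical section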
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.501 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.636 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.637 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.645 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.645 189385 INFO nova.compute.claims [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Claim successful on node compute-0.ctlplane.example.com
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.748 189385 DEBUG nova.compute.provider_tree [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.760 189385 DEBUG nova.scheduler.client.report [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
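Annotation: the inventory dict above is what the resource tracker reports to Placement; the capacity the scheduler can actually consume is (total - reserved) * allocation_ratio per resource class. Checking with the logged numbers:

    # Effective capacity implied by the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 71.1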
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.780 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.781 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.832 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.832 189385 DEBUG nova.network.neutron [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.854 189385 INFO nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.897 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.985 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.987 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.987 189385 INFO nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Creating image(s)
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.988 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.989 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.990 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.990 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "efa46ac01001129056abbd05fc9719c35c46db87" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:32:56 compute-0 nova_compute[189381]: 2025-11-25 10:32:56.991 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:32:57 compute-0 podman[239428]: 2025-11-25 10:32:57.974813144 +0000 UTC m=+0.092025514 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:32:58 compute-0 nova_compute[189381]: 2025-11-25 10:32:58.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:32:58 compute-0 nova_compute[189381]: 2025-11-25 10:32:58.231 189385 WARNING oslo_policy.policy [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 25 10:32:58 compute-0 nova_compute[189381]: 2025-11-25 10:32:58.232 189385 WARNING oslo_policy.policy [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 25 10:32:58 compute-0 nova_compute[189381]: 2025-11-25 10:32:58.247 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:32:58 compute-0 nova_compute[189381]: 2025-11-25 10:32:58.368 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.part --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
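Annotation: every qemu-img info call above is wrapped in oslo_concurrency.prlimit, capping the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) so a corrupt or hostile image cannot wedge the agent. Roughly how such a call is issued through the oslo.concurrency API (path taken from the log):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.part',
        '--force-share', '--output=json',
        prlimit=limits)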
Nov 25 10:32:58 compute-0 nova_compute[189381]: 2025-11-25 10:32:58.369 189385 DEBUG nova.virt.images [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] d3f57a9d-2502-43be-9afd-d2b6e1c15c08 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 10:32:58 compute-0 nova_compute[189381]: 2025-11-25 10:32:58.381 189385 DEBUG nova.privsep.utils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 10:32:58 compute-0 nova_compute[189381]: 2025-11-25 10:32:58.382 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.part /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:32:59 compute-0 nova_compute[189381]: 2025-11-25 10:32:59.090 189385 DEBUG nova.network.neutron [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Successfully created port: b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 10:32:59 compute-0 nova_compute[189381]: 2025-11-25 10:32:59.494 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.part /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.converted" returned: 0 in 1.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:32:59 compute-0 nova_compute[189381]: 2025-11-25 10:32:59.498 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:32:59 compute-0 nova_compute[189381]: 2025-11-25 10:32:59.557 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:32:59 compute-0 nova_compute[189381]: 2025-11-25 10:32:59.558 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
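Annotation: the 2.568 s held under the checksum lock covers nova's fetch-to-raw path: probe the downloaded .part file, and since image d3f57a9d-... turned out to be qcow2 while the cache stores raw bases, convert it before promoting .converted into _base. The decision, condensed to a sketch (plain subprocess standing in for processutils, no locking):

    import json, subprocess

    def fetch_to_raw(part_path, converted_path):
        info = json.loads(subprocess.check_output(
            ['qemu-img', 'info', part_path, '--force-share', '--output=json']))
        if info['format'] == 'qcow2':
            # -t none bypasses the host page cache, as in the logged command.
            subprocess.check_call(['qemu-img', 'convert', '-t', 'none',
                                   '-O', 'raw', '-f', 'qcow2',
                                   part_path, converted_path])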
Nov 25 10:32:59 compute-0 nova_compute[189381]: 2025-11-25 10:32:59.571 189385 INFO oslo.privsep.daemon [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpru61yv13/privsep.sock']
Nov 25 10:32:59 compute-0 podman[203557]: time="2025-11-25T10:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:32:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:32:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4297 "" "Go-http-client/1.1"
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.273 189385 INFO oslo.privsep.daemon [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Spawned new privsep daemon via rootwrap
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.138 239472 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.143 239472 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.145 239472 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.146 239472 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239472
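Annotation: the capability set printed above (effective/permitted CAP_CHOWN through CAP_SYS_ADMIN, nothing inheritable) is what nova's sys_admin_pctxt privsep context requests; the helper starts as root via rootwrap, then drops to exactly those capabilities. A sketch of declaring such a context with oslo.privsep, matching the logged list (illustrative, not a verbatim copy of nova's source):

    from oslo_privsep import capabilities, priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_CHOWN,
                      capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_DAC_READ_SEARCH,
                      capabilities.CAP_FOWNER,
                      capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN])

    @sys_admin_pctxt.entrypoint
    def privileged_chown(path, uid, gid):
        import os
        os.chown(path, uid, gid)  # runs inside the privsep daemon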
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.358 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.414 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.415 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "efa46ac01001129056abbd05fc9719c35c46db87" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.416 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.427 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.486 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.487 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87,backing_fmt=raw /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.556 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87,backing_fmt=raw /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk 1073741824" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.558 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
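Annotation: the root disk created above is not a copy of the base image; it is a qcow2 overlay whose backing file is the shared raw base, sized to the flavor's 1 GiB (1073741824 bytes), so only blocks the guest writes land in the overlay. The logged command, reduced to a sketch:

    import subprocess

    base = '/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87'
    disk = '/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk'
    subprocess.check_call(
        ['qemu-img', 'create', '-f', 'qcow2',
         '-o', 'backing_file=%s,backing_fmt=raw' % base,
         disk, '1073741824'])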
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.559 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.626 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.628 189385 DEBUG nova.virt.disk.api [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Checking if we can resize image /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.628 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.689 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.691 189385 DEBUG nova.virt.disk.api [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Cannot resize image /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
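Annotation: the "Cannot resize image ... to a smaller size" line is informational, not an error: nova only ever grows disks, and here the requested size equals the overlay's virtual size, so the resize step is skipped. The guard amounts to (sketch):

    def can_resize_image(virtual_size, requested_size):
        # Grow only; shrinking a guest image would corrupt its filesystem.
        return requested_size > virtual_size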
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.691 189385 DEBUG nova.objects.instance [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'migration_context' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.710 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.711 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.711 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.712 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.713 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.714 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.746 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.747 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.800 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.802 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
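Annotation: ephemeral storage follows the same cache-then-overlay scheme as the root disk: a 1 GiB raw template is built once, formatted vfat with label ephemeral0, and cached as ephemeral_1_0706d66; a per-instance qcow2 overlay (disk.eph0, created below) is then layered on top. The template build, as a sketch:

    import subprocess

    tmpl = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'
    subprocess.check_call(['qemu-img', 'create', '-f', 'raw', tmpl, '1G'])
    subprocess.check_call(['mkfs', '-t', 'vfat', '-n', 'ephemeral0', tmpl])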
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.817 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.903 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.906 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.908 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.921 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.976 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:00 compute-0 nova_compute[189381]: 2025-11-25 10:33:00.978 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.135 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 1073741824" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.137 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.138 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.204 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.205 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.205 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Ensure instance console log exists: /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.206 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.206 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.206 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:01 compute-0 openstack_network_exporter[205722]: ERROR   10:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:33:01 compute-0 openstack_network_exporter[205722]: ERROR   10:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:33:01 compute-0 openstack_network_exporter[205722]: ERROR   10:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:33:01 compute-0 openstack_network_exporter[205722]: ERROR   10:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Nov 25 10:33:01 compute-0 openstack_network_exporter[205722]: ERROR   10:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.499 189385 DEBUG nova.network.neutron [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Successfully updated port: b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.521 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.522 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:33:01 compute-0 nova_compute[189381]: 2025-11-25 10:33:01.522 189385 DEBUG nova.network.neutron [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 10:33:01 compute-0 podman[239505]: 2025-11-25 10:33:01.968170994 +0000 UTC m=+0.075050866 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:33:02 compute-0 nova_compute[189381]: 2025-11-25 10:33:02.035 189385 DEBUG nova.compute.manager [req-673fb6bc-ae81-4fa9-8bbc-85c5329b8bdb req-b6076436-7983-424b-9f5d-ccb6db4338aa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received event network-changed-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:33:02 compute-0 nova_compute[189381]: 2025-11-25 10:33:02.035 189385 DEBUG nova.compute.manager [req-673fb6bc-ae81-4fa9-8bbc-85c5329b8bdb req-b6076436-7983-424b-9f5d-ccb6db4338aa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Refreshing instance network info cache due to event network-changed-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:33:02 compute-0 nova_compute[189381]: 2025-11-25 10:33:02.035 189385 DEBUG oslo_concurrency.lockutils [req-673fb6bc-ae81-4fa9-8bbc-85c5329b8bdb req-b6076436-7983-424b-9f5d-ccb6db4338aa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:33:02 compute-0 nova_compute[189381]: 2025-11-25 10:33:02.280 189385 DEBUG nova.network.neutron [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
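Annotation: _reclaim_queued_deletes is a no-op here because reclaim_instance_interval is at its default of 0, meaning deletes are immediate rather than soft. The knob, sketched as an oslo.config option (help text paraphrased):

    from oslo_config import cfg

    opt = cfg.IntOpt('reclaim_instance_interval',
                     default=0,
                     help='Interval in seconds for reclaiming soft-deleted '
                          'instances; values <= 0 disable soft delete.')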
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.042 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.042 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.043 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.043 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.327 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.327 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.340 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:33:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:33:03.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
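The cycle above shows the same two-step pattern for every meter: run the [local_instances] discovery, then skip the pollster when discovery returns nothing. Because this host has no running instances at this moment, every pollster is skipped and then immediately marked finished. A sketch of that per-cycle discovery-and-skip flow, assuming a shared discovery cache so each discovery method runs at most once per cycle (illustrative names, not ceilometer's API):

    def discover(method, discovery_cache, discover_fn):
        # Run each discovery method at most once per polling cycle;
        # later pollsters reuse the cached result.
        if method not in discovery_cache:
            discovery_cache[method] = discover_fn(method)
        return discovery_cache[method]

    def run_pollster(name, method, discovery_cache, discover_fn):
        resources = discover(method, discovery_cache, discover_fn)
        if not resources:
            print("Skip pollster %s, no resources found this cycle" % name)
            return []
        return [(name, r) for r in resources]

    # One cycle on a host with no local instances: every pollster skips.
    cache = {}
    for meter in ("cpu", "memory.usage", "disk.root.size"):
        run_pollster(meter, "local_instances", cache, lambda method: [])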
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.398 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.400 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5608MB free_disk=72.23199081420898GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.400 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.400 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
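The two lockutils lines show the usual oslo pattern: a named in-process lock is taken around the resource-tracker update, and the wait and hold times are logged (0.000s waited here; 0.393s held when the lock is released further down). A rough stand-in for that behaviour using plain threading rather than oslo_concurrency (this synchronized decorator is a made-up sketch, not oslo's):

    import threading
    import time
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)

    def synchronized(name):
        # Named-lock decorator that reports how long the caller waited
        # for the lock and how long it was held, like the lines above.
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with _locks[name]:
                    waited = time.monotonic() - t0
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        print('Lock "%s": waited %.3fs, held %.3fs'
                              % (name, waited, time.monotonic() - t1))
            return inner
        return wrap

    @synchronized("compute_resources")
    def update_available_resource():
        time.sleep(0.05)  # stand-in for the resource-tracker update

    update_available_resource()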
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.475 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.476 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.476 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.524 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.557 189385 ERROR nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [req-87e7ca3f-2d3d-4a99-afab-1b7bc3ef1f07] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID a660730c-fa97-4a71-acf8-b1f3eef924ba.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-87e7ca3f-2d3d-4a99-afab-1b7bc3ef1f07"}]}
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.578 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.635 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.636 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.658 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.687 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.731 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.766 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updated inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.767 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.767 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
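The 409 above is placement's optimistic concurrency control working as designed: the inventory PUT carries the resource provider generation the client last saw, placement rejects it with placement.concurrent_update because another writer moved the generation on, and nova refreshes its cached view and retries, after which its local generation advances from 3 to 4. A toy model of the generation check and the refresh-and-retry step (assumed class and function names, not nova's or placement's API):

    class ConflictError(Exception):
        pass

    class FakePlacement:
        # Toy stand-in for placement: an inventory update succeeds only
        # if the caller presents the provider's current generation.
        def __init__(self, generation=3):
            self.generation = generation
            self.inventory = {}

        def put_inventory(self, generation, inventory):
            if generation != self.generation:
                raise ConflictError("placement.concurrent_update")
            self.inventory = inventory
            self.generation += 1
            return self.generation

    def set_inventory(placement, cached_generation, inventory):
        # On a 409-style conflict, refresh the generation and retry once,
        # mirroring the refresh sequence in the log above.
        try:
            return placement.put_inventory(cached_generation, inventory)
        except ConflictError:
            return placement.put_inventory(placement.generation, inventory)

    p = FakePlacement(generation=4)   # another writer already moved it on
    print(set_inventory(p, 3, {"VCPU": {"total": 8}}))  # conflict, retry -> 5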
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.792 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.793 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.829 189385 DEBUG nova.network.neutron [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.851 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.851 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Instance network_info: |[{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.852 189385 DEBUG oslo_concurrency.lockutils [req-673fb6bc-ae81-4fa9-8bbc-85c5329b8bdb req-b6076436-7983-424b-9f5d-ccb6db4338aa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.852 189385 DEBUG nova.network.neutron [req-673fb6bc-ae81-4fa9-8bbc-85c5329b8bdb req-b6076436-7983-424b-9f5d-ccb6db4338aa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Refreshing network info cache for port b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.856 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Start _get_guest_xml network_info=[{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:31:35Z,direct_url=<?>,disk_format='qcow2',id=d3f57a9d-2502-43be-9afd-d2b6e1c15c08,min_disk=0,min_ram=0,name='cirros',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:31:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 1, 'device_type': 'disk', 'encrypted': False, 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
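The Start _get_guest_xml line carries the whole guest recipe: a virtio root disk built from the qcow2 image, a 1 GB virtio ephemeral disk, and a config-drive cdrom on the sata bus, with device names assigned per bus (vda, vdb, sda). A small sketch of how such a mapping could be assembled (hypothetical helper, not nova's block-device code):

    def build_disk_mapping(disk_bus="virtio", cdrom_bus="sata",
                           ephemerals=1, config_drive=True):
        # Root disk first on the default bus, then ephemerals, then the
        # config drive as a cdrom; device names follow bus order.
        mapping = {"root": {"bus": disk_bus, "dev": "vda",
                            "type": "disk", "boot_index": "1"}}
        for i in range(ephemerals):
            mapping["disk.eph%d" % i] = {"bus": disk_bus,
                                         "dev": "vd%s" % chr(ord("b") + i),
                                         "type": "disk"}
        if config_drive:
            mapping["disk.config"] = {"bus": cdrom_bus, "dev": "sda",
                                      "type": "cdrom"}
        return mapping

    print(build_disk_mapping())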
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.862 189385 WARNING nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.871 189385 DEBUG nova.virt.libvirt.host [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.872 189385 DEBUG nova.virt.libvirt.host [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.880 189385 DEBUG nova.virt.libvirt.host [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.881 189385 DEBUG nova.virt.libvirt.host [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
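
The two probes above look for a CPU controller first in cgroups v1 and then in the v2 unified hierarchy; on this host only v2 provides it. The v2 check reduces to reading the unified hierarchy's controller list; a sketch under that assumption (not Nova's exact code):

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On a cgroup-v2 host, cgroup.controllers lists the available
        # controllers, e.g. "cpuset cpu io memory hugetlb pids ...".
        try:
            with open(root + '/cgroup.controllers') as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted here

    print(has_cgroupsv2_cpu_controller())
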
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.881 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.882 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:31:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8b869036-db8e-4fd3-b57a-e59e272f3c73',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:31:35Z,direct_url=<?>,disk_format='qcow2',id=d3f57a9d-2502-43be-9afd-d2b6e1c15c08,min_disk=0,min_ram=0,name='cirros',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:31:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.882 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.882 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.883 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.883 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.883 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.884 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.884 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.884 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.885 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.885 189385 DEBUG nova.virt.hardware [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
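
With no CPU-topology extra specs on flavor m1.small and none on the cirros image, every preference above is unset (0), the limits default to 65536, and the only topology that multiplies out to one vCPU is 1 socket x 1 core x 1 thread. A toy reconstruction of that enumeration (a sketch of the constraint logic, not nova.virt.hardware itself):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals the
        # vCPU count and which respect the per-dimension limits.
        bound = range(1, vcpus + 1)
        for s, c, t in itertools.product(bound, repeat=3):
            if (s * c * t == vcpus and s <= max_sockets
                    and c <= max_cores and t <= max_threads):
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as chosen in the log
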
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.890 189385 DEBUG nova.privsep.utils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.891 189385 DEBUG nova.virt.libvirt.vif [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:32:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-axvtrqdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:32:56Z,user_data=None,user_id='af7a147d86064a21a94066f72173bba2',uuid=31174924-a3e8-4662-baad-ac9aa49c01ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.892 189385 DEBUG nova.network.os_vif_util [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.893 189385 DEBUG nova.network.os_vif_util [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:39:09,bridge_name='br-int',has_traffic_filtering=True,id=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cf5c87-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.894 189385 DEBUG nova.objects.instance [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'pci_devices' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.911 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] End _get_guest_xml xml=<domain type="kvm">
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <uuid>31174924-a3e8-4662-baad-ac9aa49c01ab</uuid>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <name>instance-00000001</name>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <memory>524288</memory>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <metadata>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <nova:name>test_0</nova:name>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 10:33:03</nova:creationTime>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <nova:flavor name="m1.small">
Nov 25 10:33:03 compute-0 nova_compute[189381]:         <nova:memory>512</nova:memory>
Nov 25 10:33:03 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 10:33:03 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 10:33:03 compute-0 nova_compute[189381]:         <nova:ephemeral>1</nova:ephemeral>
Nov 25 10:33:03 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 10:33:03 compute-0 nova_compute[189381]:         <nova:user uuid="af7a147d86064a21a94066f72173bba2">admin</nova:user>
Nov 25 10:33:03 compute-0 nova_compute[189381]:         <nova:project uuid="aef0c6ba1dd54218a527ced3f8d2a1be">admin</nova:project>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="d3f57a9d-2502-43be-9afd-d2b6e1c15c08"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 10:33:03 compute-0 nova_compute[189381]:         <nova:port uuid="b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0">
Nov 25 10:33:03 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="192.168.0.95" ipVersion="4"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   </metadata>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <system>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <entry name="serial">31174924-a3e8-4662-baad-ac9aa49c01ab</entry>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <entry name="uuid">31174924-a3e8-4662-baad-ac9aa49c01ab</entry>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </system>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <os>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   </os>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <features>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <apic/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   </features>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   </clock>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <target dev="vdb" bus="virtio"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.config"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:f3:39:09"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <target dev="tapb6cf5c87-86"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </interface>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/console.log" append="off"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </serial>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <video>
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </video>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 10:33:03 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 10:33:03 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 10:33:03 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:33:03 compute-0 nova_compute[189381]: </domain>
Nov 25 10:33:03 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
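
The document between "Start _get_guest_xml" and this point is the complete libvirt domain definition the driver hands to libvirtd. Outside of Nova, the same XML can be fed to libvirt directly; a minimal sketch with libvirt-python (assumes qemu:///system is reachable and the referenced disk files exist):

    import libvirt

    with open('domain.xml') as f:   # the <domain> document logged above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the definition
        # Start paused; the "VM Paused" lifecycle event later in this log
        # shows Nova doing the same while network plugging completes.
        dom.createWithFlags(libvirt.VIR_DOMAIN_START_PAUSED)
    finally:
        conn.close()
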
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.912 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Preparing to wait for external event network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.913 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.913 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.913 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
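
Before plugging the VIF, the compute manager registers a waiter for the network-vif-plugged-<port> external event that Neutron will deliver once the port is wired up; the "<uuid>-events" lock above guards that registry. In spirit (a toy sketch using threading, not Nova's actual eventlet-based implementation):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()   # cf. the "<uuid>-events" lock
            self._events = {}

        def prepare(self, instance_uuid, event_name):
            # _create_or_get_event in the log: idempotently register a waiter.
            with self._lock:
                return self._events.setdefault((instance_uuid, event_name),
                                               threading.Event())

        def deliver(self, instance_uuid, event_name):
            # Called when the external event arrives via the compute API.
            with self._lock:
                ev = self._events.pop((instance_uuid, event_name), None)
            if ev is not None:
                ev.set()
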
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.913 189385 DEBUG nova.virt.libvirt.vif [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:32:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-axvtrqdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:32:56Z,user_data=None,user_id='af7a147d86064a21a94066f72173bba2',uuid=31174924-a3e8-4662-baad-ac9aa49c01ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.914 189385 DEBUG nova.network.os_vif_util [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.914 189385 DEBUG nova.network.os_vif_util [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:39:09,bridge_name='br-int',has_traffic_filtering=True,id=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cf5c87-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.915 189385 DEBUG os_vif [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:39:09,bridge_name='br-int',has_traffic_filtering=True,id=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cf5c87-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.946 189385 DEBUG ovsdbapp.backend.ovs_idl [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.946 189385 DEBUG ovsdbapp.backend.ovs_idl [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.947 189385 DEBUG ovsdbapp.backend.ovs_idl [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.947 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.948 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.948 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.949 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.952 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.954 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.963 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.963 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.963 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:33:03 compute-0 nova_compute[189381]: 2025-11-25 10:33:03.965 189385 INFO oslo.privsep.daemon [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpzf_q5bbe/privsep.sock']
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.641 189385 INFO oslo.privsep.daemon [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Spawned new privsep daemon via rootwrap
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.520 239530 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.523 239530 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.525 239530 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.525 239530 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239530
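
The rootwrap command above forks a privsep daemon whose retained capabilities (CAP_DAC_OVERRIDE and CAP_NET_ADMIN) match the vif_plug_ovs.privsep.vif_plug context named on its command line. Declaring such a context looks roughly like this with the oslo.privsep API (a sketch; plug_tap is a hypothetical helper):

    from oslo_privsep import capabilities as c
    from oslo_privsep import priv_context

    # A context equivalent to the one named in the log; the daemon it
    # spawns keeps exactly the capabilities in the "eff/prm/inh" line above.
    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privileged',
        pypath=__name__ + '.vif_plug',
        capabilities=[c.CAP_NET_ADMIN, c.CAP_DAC_OVERRIDE],
    )

    @vif_plug.entrypoint
    def plug_tap(devname):
        """Hypothetical privileged helper; runs inside the privsep daemon."""
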
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.794 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.794 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.795 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.827 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.827 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.979 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.980 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6cf5c87-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.981 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6cf5c87-86, col_values=(('external_ids', {'iface-id': 'b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:39:09', 'vm-uuid': '31174924-a3e8-4662-baad-ac9aa49c01ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.983 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:04 compute-0 NetworkManager[56317]: <info>  [1764066784.9848] manager: (tapb6cf5c87-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.987 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.992 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:04 compute-0 nova_compute[189381]: 2025-11-25 10:33:04.993 189385 INFO os_vif [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:39:09,bridge_name='br-int',has_traffic_filtering=True,id=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cf5c87-86')
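
The plug itself is three OVSDB operations, all visible in the transaction logs above: an idempotent bridge add for br-int, a port add for the tap device, and a db-set writing Neutron's external_ids onto the Interface row (which is what ovn-controller reacts to when it claims the lport below). The same calls through ovsdbapp against the endpoint from the log, collapsed into one transaction for illustration (values copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        'iface-id': 'b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:f3:39:09',
        'vm-uuid': '31174924-a3e8-4662-baad-ac9aa49c01ab',
    }

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapb6cf5c87-86', may_exist=True))
        txn.add(api.db_set('Interface', 'tapb6cf5c87-86',
                           ('external_ids', external_ids)))
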
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.114 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.114 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.115 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.115 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No VIF found with MAC fa:16:3e:f3:39:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.116 189385 INFO nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Using config drive
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.397 189385 DEBUG nova.network.neutron [req-673fb6bc-ae81-4fa9-8bbc-85c5329b8bdb req-b6076436-7983-424b-9f5d-ccb6db4338aa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated VIF entry in instance network info cache for port b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.397 189385 DEBUG nova.network.neutron [req-673fb6bc-ae81-4fa9-8bbc-85c5329b8bdb req-b6076436-7983-424b-9f5d-ccb6db4338aa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.418 189385 DEBUG oslo_concurrency.lockutils [req-673fb6bc-ae81-4fa9-8bbc-85c5329b8bdb req-b6076436-7983-424b-9f5d-ccb6db4338aa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.772 189385 INFO nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Creating config drive at /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.config
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.777 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8o4bl0ky execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.904 189385 DEBUG oslo_concurrency.processutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8o4bl0ky" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
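
The config drive is a plain ISO 9660 image with volume label config-2, built from a temporary staging directory of metadata files. Replaying the exact invocation from the log through oslo.concurrency (the publisher string is a single argument; the log's flattened command line only makes it look unquoted):

    from oslo_concurrency import processutils

    inst = '/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab'
    publisher = 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9'

    processutils.execute(
        '/usr/bin/mkisofs', '-o', inst + '/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', publisher, '-quiet', '-J', '-r',
        '-V', 'config-2', '/tmp/tmp8o4bl0ky')  # staging dir from the log
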
Nov 25 10:33:05 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 25 10:33:05 compute-0 kernel: tapb6cf5c87-86: entered promiscuous mode
Nov 25 10:33:05 compute-0 NetworkManager[56317]: <info>  [1764066785.9934] manager: (tapb6cf5c87-86): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.994 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:05 compute-0 ovn_controller[97779]: 2025-11-25T10:33:05Z|00027|binding|INFO|Claiming lport b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 for this chassis.
Nov 25 10:33:05 compute-0 ovn_controller[97779]: 2025-11-25T10:33:05Z|00028|binding|INFO|b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0: Claiming fa:16:3e:f3:39:09 192.168.0.95
Nov 25 10:33:05 compute-0 nova_compute[189381]: 2025-11-25 10:33:05.998 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:06 compute-0 systemd-udevd[239557]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:33:06 compute-0 NetworkManager[56317]: <info>  [1764066786.0421] device (tapb6cf5c87-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:33:06 compute-0 NetworkManager[56317]: <info>  [1764066786.0433] device (tapb6cf5c87-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.066 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:06 compute-0 ovn_controller[97779]: 2025-11-25T10:33:06Z|00029|binding|INFO|Setting lport b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 ovn-installed in OVS
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.075 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:06 compute-0 systemd-machined[155706]: New machine qemu-1-instance-00000001.
Nov 25 10:33:06 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.121 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:39:09 192.168.0.95'], port_security=['fa:16:3e:f3:39:09 192.168.0.95'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.95/24', 'neutron:device_id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35870011-2c24-4719-a9ee-4942cd8ed50e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48d58879-e124-47b1-85de-2b7aab5c0e02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53f1de54-d9db-4691-881b-b04f921a948f, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:33:06 compute-0 ovn_controller[97779]: 2025-11-25T10:33:06Z|00030|binding|INFO|Setting lport b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 up in Southbound
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.122 106634 INFO neutron.agent.ovn.metadata.agent [-] Port b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 in datapath 35870011-2c24-4719-a9ee-4942cd8ed50e bound to our chassis
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.124 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35870011-2c24-4719-a9ee-4942cd8ed50e
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.125 106634 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp0p7knj3l/privsep.sock']
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.527 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764066786.5268552, 31174924-a3e8-4662-baad-ac9aa49c01ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.528 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] VM Started (Lifecycle Event)
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.572 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.579 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764066786.5269885, 31174924-a3e8-4662-baad-ac9aa49c01ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.579 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] VM Paused (Lifecycle Event)
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.602 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.607 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.625 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] During sync_power_state the instance has a pending task (spawning). Skip.
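
The "VM power_state: 3" in the sync message above is nova's PAUSED constant: the guest was started paused and is resumed only after the network-vif-plugged event registered earlier arrives. For reference, the mapping defined in nova.compute.power_state:

    # nova.compute.power_state constants:
    NOSTATE   = 0   # the DB value logged above, i.e. not yet recorded
    RUNNING   = 1
    PAUSED    = 3   # the hypervisor state logged above
    SHUTDOWN  = 4
    CRASHED   = 6
    SUSPENDED = 7
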
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.829 106634 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.830 106634 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp0p7knj3l/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.704 239582 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.709 239582 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.711 239582 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.712 239582 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239582
Nov 25 10:33:06 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:06.833 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[886a72ca-4895-4d34-8eef-7b02d6c83a41]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:06 compute-0 nova_compute[189381]: 2025-11-25 10:33:06.982 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:07 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 10:33:07 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 10:33:07 compute-0 podman[239587]: 2025-11-25 10:33:07.252899543 +0000 UTC m=+0.076215139 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
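This health_status entry (and the similar ones later in the section) is podman's periodic healthcheck result for an edpm-managed container; the 'healthcheck' stanza in config_data becomes the container's health command. Roughly, and only as an illustration of the mapping — the edpm_ansible role generates the real invocation, and the 'mount' key is realized as a volume, not a flag:

    import subprocess

    # Hypothetical hand-run equivalent of the podman_exporter container
    # above, reduced to the healthcheck-relevant pieces.
    subprocess.run([
        "podman", "run", "--detach", "--name", "podman_exporter",
        "--net", "host", "--privileged", "--user", "root",
        "--health-cmd", "/openstack/healthcheck podman_exporter",
        "-v", "/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z",
        "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
        "--web.config.file=/etc/podman_exporter/podman_exporter.yaml",
    ], check=True)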
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.368 239582 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.368 239582 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.368 239582 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.954 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[7535d418-51e4-4aad-8a11-640d15ba1aa8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.955 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35870011-21 in ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.957 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35870011-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.957 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[39d7d643-1294-4f2e-92f2-16ca04ebc30a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.960 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[6540c984-5716-4e94-82df-392bb1cecc67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:07 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:07.981 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[85dd4f5b-eb4c-4894-ae45-7c42fa802c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.084 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8d08433e-6c3e-4411-b017-957971a17979]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.087 106634 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpae9mkyia/privsep.sock']
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.722 106634 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.723 106634 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpae9mkyia/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.611 239638 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.615 239638 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.616 239638 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.617 239638 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239638
Nov 25 10:33:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:08.727 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[ee1d18e3-3db2-4ad9-bcfc-302046ee0507]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.919 189385 DEBUG nova.compute.manager [req-2a83fc7c-ab8b-499b-963c-907de8ff2ad1 req-ff9ee59e-6087-4ead-b1e2-821032dfd0ed d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received event network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.919 189385 DEBUG oslo_concurrency.lockutils [req-2a83fc7c-ab8b-499b-963c-907de8ff2ad1 req-ff9ee59e-6087-4ead-b1e2-821032dfd0ed d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.920 189385 DEBUG oslo_concurrency.lockutils [req-2a83fc7c-ab8b-499b-963c-907de8ff2ad1 req-ff9ee59e-6087-4ead-b1e2-821032dfd0ed d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.920 189385 DEBUG oslo_concurrency.lockutils [req-2a83fc7c-ab8b-499b-963c-907de8ff2ad1 req-ff9ee59e-6087-4ead-b1e2-821032dfd0ed d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.920 189385 DEBUG nova.compute.manager [req-2a83fc7c-ab8b-499b-963c-907de8ff2ad1 req-ff9ee59e-6087-4ead-b1e2-821032dfd0ed d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Processing event network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.921 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
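The sequence above is nova's external-event rendezvous: the spawn thread registered a waiter for network-vif-plugged on this port, neutron delivered the event through the API, pop_instance_event released the waiter, and the wait completed after 2 seconds. A reduced sketch of that latch pattern (names are ours, not nova's):

    import threading

    _waiters = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare(instance, name):           # spawn path, before plugging VIFs
        _waiters[(instance, name)] = threading.Event()

    def pop(instance, name):               # API path, on the neutron event
        latch = _waiters.pop((instance, name), None)
        if latch is None:
            return False                   # -> the 'unexpected event' warning
        latch.set()
        return True

    def wait(instance, name, timeout=300):
        latch = _waiters.get((instance, name))
        return latch.wait(timeout) if latch else True

    uuid = "31174924-a3e8-4662-baad-ac9aa49c01ab"
    prepare(uuid, "network-vif-plugged")
    threading.Timer(0.1, pop, (uuid, "network-vif-plugged")).start()
    print(wait(uuid, "network-vif-plugged", timeout=5))   # True

The same pop-with-no-waiter branch is what produces the WARNING a few seconds later in this log, when a duplicate network-vif-plugged arrives after the build has finished.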
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.926 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764066788.9258065, 31174924-a3e8-4662-baad-ac9aa49c01ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.926 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] VM Resumed (Lifecycle Event)
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.928 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.935 189385 INFO nova.virt.libvirt.driver [-] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Instance spawned successfully.
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.935 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.975 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:33:08 compute-0 nova_compute[189381]: 2025-11-25 10:33:08.986 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.000 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.000 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.001 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.001 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.002 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.003 189385 DEBUG nova.virt.libvirt.driver [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
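The six 'Found default for ...' lines record nova pinning down image properties the image left undefined, so later rebuilds and migrations keep the buses this guest was actually created with. The same merge, using exactly the defaults logged for this instance (the function is a sketch, not nova's _register_undefined_instance_details):

    defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined(image_props):
        # Fill in the hypervisor-chosen default for every key the image
        # did not define; explicit image properties win.
        merged = dict(defaults)
        merged.update({k: v for k, v in image_props.items() if v is not None})
        return merged

    print(register_undefined({"hw_disk_bus": "scsi"}))  # keeps explicit scsi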
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.006 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.216 189385 INFO nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Took 12.23 seconds to spawn the instance on the hypervisor.
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.217 189385 DEBUG nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.247 239638 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.247 239638 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.247 239638 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.342 189385 INFO nova.compute.manager [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Took 12.74 seconds to build instance.
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.375 189385 DEBUG oslo_concurrency.lockutils [None req-d8f83de9-5676-4097-b067-68d0bbcf2e8d af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.840 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[7ebcf865-93ee-4cb7-9388-fe3224e33b70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:09 compute-0 NetworkManager[56317]: <info>  [1764066789.8733] manager: (tap35870011-20): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.872 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[130a2baf-5ee6-4d51-b2d9-20e9896f3630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:09 compute-0 systemd-udevd[239650]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.915 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b0f630-8526-46d2-8167-c29685780b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.919 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[f48c4574-5219-4e99-8ccd-4e4193db278e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:09 compute-0 NetworkManager[56317]: <info>  [1764066789.9443] device (tap35870011-20): carrier: link connected
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.949 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[a09c340e-3a2a-4b3d-bfe7-a2eea2b119d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.966 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[814eb7f9-aba6-4b88-b161-97e5fff5b389]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35870011-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:64:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369752, 'reachable_time': 36390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239668, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:09.981 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8e4edd-a1a0-4599-b8f8-1866386c4273]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:642e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369752, 'tstamp': 369752}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239669, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:09 compute-0 nova_compute[189381]: 2025-11-25 10:33:09.983 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.005 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[891a26a7-f71f-4841-859c-5a1cf2396678]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35870011-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:64:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369752, 'reachable_time': 36390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239670, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
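The two RTM_NEWLINK replies above are netlink dumps that neutron's privileged ip_lib code (built on pyroute2) fetched from inside the ovnmeta namespace. A minimal pyroute2 sketch that reads the same attributes back; it assumes root and that the namespace and interface named in the log still exist:

    from pyroute2 import NetNS

    ns = NetNS("ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e")
    try:
        (idx,) = ns.link_lookup(ifname="tap35870011-21")
        (link,) = ns.get_links(idx)
        print(link.get_attr("IFLA_ADDRESS"))    # fa:16:3e:a0:64:2e above
        print(link.get_attr("IFLA_OPERSTATE"))  # 'UP'
        print(link.get_attr("IFLA_MTU"))        # 1500
    finally:
        ns.close()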
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.034 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[4463f480-5541-4981-af8c-62a612c3f89f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.092 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0774c448-6b21-4834-9da0-03c1b6b5d3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.095 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35870011-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.096 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.097 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35870011-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:33:10 compute-0 nova_compute[189381]: 2025-11-25 10:33:10.099 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:10 compute-0 NetworkManager[56317]: <info>  [1764066790.0996] manager: (tap35870011-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 25 10:33:10 compute-0 kernel: tap35870011-20: entered promiscuous mode
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.101 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35870011-20, col_values=(('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
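The three ovsdbapp transactions above (delete the port from br-ex if it exists — a no-op here, per "Transaction caused no change" — add it to br-int, then set external_ids:iface-id) are what lets ovn-controller match the tap device to its logical port. An illustrative ovs-vsctl equivalent, run as root; the agent itself talks to ovsdb-server directly rather than shelling out:

    import subprocess

    tap, lport = "tap35870011-20", "20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac"
    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", tap],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", tap],
        ["ovs-vsctl", "set", "Interface", tap,
         "external_ids:iface-id=%s" % lport],
    ):
        subprocess.run(cmd, check=True)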
Nov 25 10:33:10 compute-0 ovn_controller[97779]: 2025-11-25T10:33:10Z|00031|binding|INFO|Releasing lport 20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac from this chassis (sb_readonly=0)
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.145 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35870011-2c24-4719-a9ee-4942cd8ed50e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35870011-2c24-4719-a9ee-4942cd8ed50e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 10:33:10 compute-0 nova_compute[189381]: 2025-11-25 10:33:10.144 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.147 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2e1fde1f-1762-4c38-af5c-8e97c8063e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.148 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: global
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-35870011-2c24-4719-a9ee-4942cd8ed50e
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/35870011-2c24-4719-a9ee-4942cd8ed50e.pid.haproxy
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID 35870011-2c24-4719-a9ee-4942cd8ed50e
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 10:33:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:10.149 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'env', 'PROCESS_TAG=haproxy-35870011-2c24-4719-a9ee-4942cd8ed50e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35870011-2c24-4719-a9ee-4942cd8ed50e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 10:33:10 compute-0 nova_compute[189381]: 2025-11-25 10:33:10.147 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:10 compute-0 podman[239702]: 2025-11-25 10:33:10.527746822 +0000 UTC m=+0.030210002 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 10:33:10 compute-0 podman[239702]: 2025-11-25 10:33:10.848962895 +0000 UTC m=+0.351426055 container create b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:33:10 compute-0 systemd[1]: Started libpod-conmon-b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2.scope.
Nov 25 10:33:11 compute-0 systemd[1]: Started libcrun container.
Nov 25 10:33:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7129039e231734bb2f6a7eb42e41cad78263e5272e33d43eb5afef027963ecd1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 10:33:11 compute-0 podman[239702]: 2025-11-25 10:33:11.078230087 +0000 UTC m=+0.580693277 container init b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:33:11 compute-0 podman[239702]: 2025-11-25 10:33:11.087180185 +0000 UTC m=+0.589643335 container start b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:33:11 compute-0 nova_compute[189381]: 2025-11-25 10:33:11.124 189385 DEBUG nova.compute.manager [req-5441db17-0a4f-4ad0-8c4b-e74d56da8ccb req-b6e85084-7937-42f5-a0a5-342cf03d1314 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received event network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:33:11 compute-0 nova_compute[189381]: 2025-11-25 10:33:11.124 189385 DEBUG oslo_concurrency.lockutils [req-5441db17-0a4f-4ad0-8c4b-e74d56da8ccb req-b6e85084-7937-42f5-a0a5-342cf03d1314 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:11 compute-0 nova_compute[189381]: 2025-11-25 10:33:11.125 189385 DEBUG oslo_concurrency.lockutils [req-5441db17-0a4f-4ad0-8c4b-e74d56da8ccb req-b6e85084-7937-42f5-a0a5-342cf03d1314 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:11 compute-0 nova_compute[189381]: 2025-11-25 10:33:11.125 189385 DEBUG oslo_concurrency.lockutils [req-5441db17-0a4f-4ad0-8c4b-e74d56da8ccb req-b6e85084-7937-42f5-a0a5-342cf03d1314 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:11 compute-0 nova_compute[189381]: 2025-11-25 10:33:11.125 189385 DEBUG nova.compute.manager [req-5441db17-0a4f-4ad0-8c4b-e74d56da8ccb req-b6e85084-7937-42f5-a0a5-342cf03d1314 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] No waiting events found dispatching network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:33:11 compute-0 nova_compute[189381]: 2025-11-25 10:33:11.126 189385 WARNING nova.compute.manager [req-5441db17-0a4f-4ad0-8c4b-e74d56da8ccb req-b6e85084-7937-42f5-a0a5-342cf03d1314 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received unexpected event network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 for instance with vm_state active and task_state None.
Nov 25 10:33:11 compute-0 neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e[239716]: [NOTICE]   (239722) : New worker (239724) forked
Nov 25 10:33:11 compute-0 neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e[239716]: [NOTICE]   (239722) : Loading success.
Nov 25 10:33:11 compute-0 nova_compute[189381]: 2025-11-25 10:33:11.984 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:13 compute-0 podman[239734]: 2025-11-25 10:33:13.972995106 +0000 UTC m=+0.088783612 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Nov 25 10:33:13 compute-0 podman[239733]: 2025-11-25 10:33:13.98147031 +0000 UTC m=+0.096369120 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 10:33:14 compute-0 podman[239772]: 2025-11-25 10:33:14.957237899 +0000 UTC m=+0.067892379 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, release-0.7.12=, config_id=edpm, name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler)
Nov 25 10:33:14 compute-0 nova_compute[189381]: 2025-11-25 10:33:14.987 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:16 compute-0 nova_compute[189381]: 2025-11-25 10:33:16.986 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:19 compute-0 nova_compute[189381]: 2025-11-25 10:33:19.990 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:21 compute-0 nova_compute[189381]: 2025-11-25 10:33:21.989 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:22 compute-0 podman[239794]: 2025-11-25 10:33:22.942060924 +0000 UTC m=+0.058851268 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:33:24 compute-0 NetworkManager[56317]: <info>  [1764066804.3166] manager: (patch-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 25 10:33:24 compute-0 NetworkManager[56317]: <info>  [1764066804.3175] device (patch-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:33:24 compute-0 NetworkManager[56317]: <info>  [1764066804.3202] manager: (patch-br-int-to-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 25 10:33:24 compute-0 NetworkManager[56317]: <info>  [1764066804.3222] device (patch-br-int-to-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.322 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:24 compute-0 NetworkManager[56317]: <info>  [1764066804.3265] manager: (patch-br-int-to-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 25 10:33:24 compute-0 NetworkManager[56317]: <info>  [1764066804.3293] manager: (patch-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 25 10:33:24 compute-0 ovn_controller[97779]: 2025-11-25T10:33:24Z|00032|binding|INFO|Releasing lport 20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac from this chassis (sb_readonly=0)
Nov 25 10:33:24 compute-0 NetworkManager[56317]: <info>  [1764066804.3357] device (patch-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 10:33:24 compute-0 NetworkManager[56317]: <info>  [1764066804.3360] device (patch-br-int-to-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 25 10:33:24 compute-0 ovn_controller[97779]: 2025-11-25T10:33:24Z|00033|binding|INFO|Releasing lport 20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac from this chassis (sb_readonly=0)
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.350 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.359 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.841 189385 DEBUG nova.compute.manager [req-4026aa6f-1646-42e6-9494-e2cf883b6c4a req-8c0ed5ac-fbeb-4976-a8aa-d2a3b3b5e740 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received event network-changed-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.841 189385 DEBUG nova.compute.manager [req-4026aa6f-1646-42e6-9494-e2cf883b6c4a req-8c0ed5ac-fbeb-4976-a8aa-d2a3b3b5e740 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Refreshing instance network info cache due to event network-changed-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.841 189385 DEBUG oslo_concurrency.lockutils [req-4026aa6f-1646-42e6-9494-e2cf883b6c4a req-8c0ed5ac-fbeb-4976-a8aa-d2a3b3b5e740 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.842 189385 DEBUG oslo_concurrency.lockutils [req-4026aa6f-1646-42e6-9494-e2cf883b6c4a req-8c0ed5ac-fbeb-4976-a8aa-d2a3b3b5e740 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.842 189385 DEBUG nova.network.neutron [req-4026aa6f-1646-42e6-9494-e2cf883b6c4a req-8c0ed5ac-fbeb-4976-a8aa-d2a3b3b5e740 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Refreshing network info cache for port b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:33:24 compute-0 nova_compute[189381]: 2025-11-25 10:33:24.993 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:26 compute-0 nova_compute[189381]: 2025-11-25 10:33:26.765 189385 DEBUG nova.network.neutron [req-4026aa6f-1646-42e6-9494-e2cf883b6c4a req-8c0ed5ac-fbeb-4976-a8aa-d2a3b3b5e740 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated VIF entry in instance network info cache for port b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:33:26 compute-0 nova_compute[189381]: 2025-11-25 10:33:26.766 189385 DEBUG nova.network.neutron [req-4026aa6f-1646-42e6-9494-e2cf883b6c4a req-8c0ed5ac-fbeb-4976-a8aa-d2a3b3b5e740 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:33:26 compute-0 nova_compute[189381]: 2025-11-25 10:33:26.791 189385 DEBUG oslo_concurrency.lockutils [req-4026aa6f-1646-42e6-9494-e2cf883b6c4a req-8c0ed5ac-fbeb-4976-a8aa-d2a3b3b5e740 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
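The three lockutils lines above show the full serialization pattern around the cache refresh: a lock named after the instance UUID is taken before Neutron is queried and dropped only after instance_info_cache has been rewritten. A minimal sketch of the same discipline using the oslo_concurrency.lockutils API; refresh_port_cache and the UUID argument are placeholders, not Nova internals:

    from oslo_concurrency import lockutils

    def refresh_instance_cache(instance_uuid, refresh_port_cache):
        # Mirrors the "refresh_cache-<uuid>" lock in the log: one
        # refresh at a time per instance, however many events arrive.
        with lockutils.lock('refresh_cache-%s' % instance_uuid):
            # Query Neutron and rewrite instance_info_cache while the
            # lock is held, as _get_instance_nw_info does above.
            refresh_port_cache(instance_uuid)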
Nov 25 10:33:26 compute-0 podman[239815]: 2025-11-25 10:33:26.949699685 +0000 UTC m=+0.053932226 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:33:26 compute-0 podman[239814]: 2025-11-25 10:33:26.981381059 +0000 UTC m=+0.087253057 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter)
Nov 25 10:33:26 compute-0 nova_compute[189381]: 2025-11-25 10:33:26.991 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:28 compute-0 podman[239855]: 2025-11-25 10:33:28.996123759 +0000 UTC m=+0.110282901 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:33:29 compute-0 podman[203557]: time="2025-11-25T10:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:33:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:33:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4765 "" "Go-http-client/1.1"
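The two GET lines above are the libpod REST API being polled over podman's UNIX socket (the podman_exporter config later in this log sets CONTAINER_HOST to unix:///run/podman/podman.sock). The same containers/json query can be reproduced from the Python standard library alone; a sketch, with the socket path taken from that config and the endpoint from the access-log line:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection variant that dials an AF_UNIX socket.
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    # The 29523-byte 200 response in the access log is this payload.
    print(len(containers), 'containers')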
Nov 25 10:33:29 compute-0 nova_compute[189381]: 2025-11-25 10:33:29.996 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:31 compute-0 openstack_network_exporter[205722]: ERROR   10:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:33:31 compute-0 openstack_network_exporter[205722]: ERROR   10:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:33:31 compute-0 openstack_network_exporter[205722]: ERROR   10:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:33:31 compute-0 openstack_network_exporter[205722]: ERROR   10:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:33:31 compute-0 openstack_network_exporter[205722]: ERROR   10:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
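These exporter errors are consistent with a compute node rather than a broken one: ovn-northd runs on the control plane, so no northd control socket exists here, and the dpif-netdev/pmd-* commands apply only to the userspace (netdev) datapath, while the port details cached at 10:33:26 show this host using datapath_type "system" (kernel). A quick check of which control sockets actually exist, as a sketch; the directories match the exporter's volume mounts in the config_data above:

    import glob

    # appctl-style tools dial these *.ctl sockets; an empty result is
    # exactly the "no control socket files found" condition logged above.
    for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'none')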
Nov 25 10:33:31 compute-0 nova_compute[189381]: 2025-11-25 10:33:31.997 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:32 compute-0 podman[239881]: 2025-11-25 10:33:32.963836217 +0000 UTC m=+0.073594388 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 10:33:35 compute-0 nova_compute[189381]: 2025-11-25 10:33:35.000 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:36.032 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:33:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:36.032 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:33:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:33:36.033 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:33:37 compute-0 nova_compute[189381]: 2025-11-25 10:33:37.000 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:37 compute-0 podman[239902]: 2025-11-25 10:33:37.963815908 +0000 UTC m=+0.081993129 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:33:40 compute-0 nova_compute[189381]: 2025-11-25 10:33:40.005 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:42 compute-0 nova_compute[189381]: 2025-11-25 10:33:42.003 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:44 compute-0 podman[239926]: 2025-11-25 10:33:44.756740087 +0000 UTC m=+0.076847682 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 10:33:44 compute-0 podman[239927]: 2025-11-25 10:33:44.767106895 +0000 UTC m=+0.083791931 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:33:45 compute-0 nova_compute[189381]: 2025-11-25 10:33:45.009 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:45 compute-0 podman[239965]: 2025-11-25 10:33:45.960322719 +0000 UTC m=+0.070034826 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.component=ubi9-container)
Nov 25 10:33:47 compute-0 nova_compute[189381]: 2025-11-25 10:33:47.005 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:47 compute-0 ovn_controller[97779]: 2025-11-25T10:33:47Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f3:39:09 192.168.0.95
Nov 25 10:33:47 compute-0 ovn_controller[97779]: 2025-11-25T10:33:47Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f3:39:09 192.168.0.95
Nov 25 10:33:50 compute-0 nova_compute[189381]: 2025-11-25 10:33:50.014 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:52 compute-0 nova_compute[189381]: 2025-11-25 10:33:52.008 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:53 compute-0 sshd-session[239993]: Connection closed by authenticating user root 171.244.51.45 port 56800 [preauth]
Nov 25 10:33:53 compute-0 podman[239995]: 2025-11-25 10:33:53.945631077 +0000 UTC m=+0.058587856 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 10:33:54 compute-0 ovn_controller[97779]: 2025-11-25T10:33:54Z|00034|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Nov 25 10:33:55 compute-0 nova_compute[189381]: 2025-11-25 10:33:55.019 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:56 compute-0 nova_compute[189381]: 2025-11-25 10:33:56.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:33:56 compute-0 nova_compute[189381]: 2025-11-25 10:33:56.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 10:33:57 compute-0 nova_compute[189381]: 2025-11-25 10:33:57.010 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:33:57 compute-0 podman[240015]: 2025-11-25 10:33:57.990827235 +0000 UTC m=+0.059733539 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 10:33:58 compute-0 podman[240014]: 2025-11-25 10:33:58.000289277 +0000 UTC m=+0.072455865 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 25 10:33:59 compute-0 podman[203557]: time="2025-11-25T10:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:33:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:33:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Nov 25 10:33:59 compute-0 podman[240055]: 2025-11-25 10:33:59.975904939 +0000 UTC m=+0.095278222 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:34:00 compute-0 nova_compute[189381]: 2025-11-25 10:34:00.021 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:00 compute-0 nova_compute[189381]: 2025-11-25 10:34:00.188 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:01 compute-0 openstack_network_exporter[205722]: ERROR   10:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:34:01 compute-0 openstack_network_exporter[205722]: ERROR   10:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:34:01 compute-0 openstack_network_exporter[205722]: ERROR   10:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:34:01 compute-0 openstack_network_exporter[205722]: ERROR   10:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:34:01 compute-0 openstack_network_exporter[205722]: ERROR   10:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:34:02 compute-0 nova_compute[189381]: 2025-11-25 10:34:02.012 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:03 compute-0 nova_compute[189381]: 2025-11-25 10:34:03.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:03 compute-0 podman[240081]: 2025-11-25 10:34:03.972947471 +0000 UTC m=+0.078140169 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.059 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.145 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.206 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.207 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.266 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.267 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.337 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.338 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.397 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
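Each disk probe above is qemu-img info wrapped by oslo_concurrency.prlimit with a 1 GiB address-space cap (--as=1073741824) and a 30-second CPU cap (--cpu=30), so a wedged qemu-img cannot drag the resource audit down with it. The same invocation expressed through the processutils API, as a sketch; disk_path stands in for the instance disk shown in the log:

    from oslo_concurrency import processutils

    disk_path = '/var/lib/nova/instances/<instance-uuid>/disk'  # placeholder
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', disk_path, '--force-share', '--output=json',
        # Matches the logged limits: 1 GiB address space, 30 s CPU time.
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    print(out)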
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.766 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.768 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5248MB free_disk=72.20839309692383GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.768 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:04 compute-0 nova_compute[189381]: 2025-11-25 10:34:04.769 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.026 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.069 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.070 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.070 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.216 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.230 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
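Placement treats each inventory entry as schedulable capacity of (total - reserved) * allocation_ratio, which is why the hypervisor view above reports free_vcpus=7 out of 8 physical while the scheduler still sees 32 schedulable VCPUs. Plugging in the logged values as a quick check:

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 70.2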
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.254 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.255 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.255 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.255 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:34:05 compute-0 nova_compute[189381]: 2025-11-25 10:34:05.267 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:34:06 compute-0 nova_compute[189381]: 2025-11-25 10:34:06.262 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:06 compute-0 nova_compute[189381]: 2025-11-25 10:34:06.262 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:06 compute-0 nova_compute[189381]: 2025-11-25 10:34:06.263 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:34:06 compute-0 nova_compute[189381]: 2025-11-25 10:34:06.263 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:34:07 compute-0 nova_compute[189381]: 2025-11-25 10:34:07.015 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:07 compute-0 nova_compute[189381]: 2025-11-25 10:34:07.258 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:34:07 compute-0 nova_compute[189381]: 2025-11-25 10:34:07.258 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:34:07 compute-0 nova_compute[189381]: 2025-11-25 10:34:07.259 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:34:07 compute-0 nova_compute[189381]: 2025-11-25 10:34:07.259 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:34:08 compute-0 podman[240114]: 2025-11-25 10:34:08.986199058 +0000 UTC m=+0.097118865 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:34:09 compute-0 nova_compute[189381]: 2025-11-25 10:34:09.980 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:34:09 compute-0 nova_compute[189381]: 2025-11-25 10:34:09.994 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:34:09 compute-0 nova_compute[189381]: 2025-11-25 10:34:09.995 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:34:09 compute-0 nova_compute[189381]: 2025-11-25 10:34:09.996 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:09 compute-0 nova_compute[189381]: 2025-11-25 10:34:09.996 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:10 compute-0 nova_compute[189381]: 2025-11-25 10:34:10.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:10 compute-0 nova_compute[189381]: 2025-11-25 10:34:10.031 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:10 compute-0 nova_compute[189381]: 2025-11-25 10:34:10.039 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:12 compute-0 nova_compute[189381]: 2025-11-25 10:34:12.020 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:12 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:12.087 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:34:12 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:12.088 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:34:12 compute-0 nova_compute[189381]: 2025-11-25 10:34:12.091 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:14 compute-0 podman[240138]: 2025-11-25 10:34:14.973617816 +0000 UTC m=+0.070522939 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Nov 25 10:34:15 compute-0 podman[240137]: 2025-11-25 10:34:15.010030104 +0000 UTC m=+0.111119648 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 25 10:34:15 compute-0 nova_compute[189381]: 2025-11-25 10:34:15.037 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:16 compute-0 podman[240174]: 2025-11-25 10:34:16.973954469 +0000 UTC m=+0.085518621 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.buildah.version=1.29.0)
Nov 25 10:34:17 compute-0 nova_compute[189381]: 2025-11-25 10:34:17.023 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.222 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.222 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.237 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.359 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.360 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.370 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.370 189385 INFO nova.compute.claims [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Claim successful on node compute-0.ctlplane.example.com
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.513 189385 DEBUG nova.compute.provider_tree [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.525 189385 DEBUG nova.scheduler.client.report [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.549 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.551 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.607 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.608 189385 DEBUG nova.network.neutron [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.635 189385 INFO nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.684 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.779 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.781 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.782 189385 INFO nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Creating image(s)
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.782 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.783 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.784 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.800 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.869 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.870 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "efa46ac01001129056abbd05fc9719c35c46db87" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.871 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.882 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.937 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.938 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87,backing_fmt=raw /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.976 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87,backing_fmt=raw /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.977 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:19 compute-0 nova_compute[189381]: 2025-11-25 10:34:19.978 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.033 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.035 189385 DEBUG nova.virt.disk.api [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Checking if we can resize image /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.035 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.050 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.095 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.096 189385 DEBUG nova.virt.disk.api [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Cannot resize image /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.097 189385 DEBUG nova.objects.instance [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'migration_context' on Instance uuid 44e7d3d0-d059-412e-a1a9-467d774d2bee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.109 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.109 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.110 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.123 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.187 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.188 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.189 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.200 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.264 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.265 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.458 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 1073741824" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.459 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.460 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.530 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.531 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.532 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Ensure instance console log exists: /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.532 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.533 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:20 compute-0 nova_compute[189381]: 2025-11-25 10:34:20.533 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:21 compute-0 nova_compute[189381]: 2025-11-25 10:34:21.486 189385 DEBUG nova.network.neutron [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Successfully updated port: c7376e3d-2069-45b2-a63a-2eefc475ad2b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 10:34:21 compute-0 nova_compute[189381]: 2025-11-25 10:34:21.522 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:34:21 compute-0 nova_compute[189381]: 2025-11-25 10:34:21.522 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquired lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:34:21 compute-0 nova_compute[189381]: 2025-11-25 10:34:21.523 189385 DEBUG nova.network.neutron [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 10:34:21 compute-0 nova_compute[189381]: 2025-11-25 10:34:21.938 189385 DEBUG nova.compute.manager [req-26f3ecf5-3ac9-48c2-bfe0-f8bc67c33de4 req-20fcdabb-92de-4646-9821-a62508178ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received event network-changed-c7376e3d-2069-45b2-a63a-2eefc475ad2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:34:21 compute-0 nova_compute[189381]: 2025-11-25 10:34:21.940 189385 DEBUG nova.compute.manager [req-26f3ecf5-3ac9-48c2-bfe0-f8bc67c33de4 req-20fcdabb-92de-4646-9821-a62508178ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Refreshing instance network info cache due to event network-changed-c7376e3d-2069-45b2-a63a-2eefc475ad2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:34:21 compute-0 nova_compute[189381]: 2025-11-25 10:34:21.940 189385 DEBUG oslo_concurrency.lockutils [req-26f3ecf5-3ac9-48c2-bfe0-f8bc67c33de4 req-20fcdabb-92de-4646-9821-a62508178ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:34:22 compute-0 nova_compute[189381]: 2025-11-25 10:34:22.026 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:22.090 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:34:22 compute-0 nova_compute[189381]: 2025-11-25 10:34:22.096 189385 DEBUG nova.network.neutron [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.281 189385 DEBUG nova.network.neutron [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updating instance_info_cache with network_info: [{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.296 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Releasing lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.297 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Instance network_info: |[{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.297 189385 DEBUG oslo_concurrency.lockutils [req-26f3ecf5-3ac9-48c2-bfe0-f8bc67c33de4 req-20fcdabb-92de-4646-9821-a62508178ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.298 189385 DEBUG nova.network.neutron [req-26f3ecf5-3ac9-48c2-bfe0-f8bc67c33de4 req-20fcdabb-92de-4646-9821-a62508178ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Refreshing network info cache for port c7376e3d-2069-45b2-a63a-2eefc475ad2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.301 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Start _get_guest_xml network_info=[{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:31:35Z,direct_url=<?>,disk_format='qcow2',id=d3f57a9d-2502-43be-9afd-d2b6e1c15c08,min_disk=0,min_ram=0,name='cirros',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:31:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 1, 'device_type': 'disk', 'encrypted': False, 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.307 189385 WARNING nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.313 189385 DEBUG nova.virt.libvirt.host [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.314 189385 DEBUG nova.virt.libvirt.host [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.319 189385 DEBUG nova.virt.libvirt.host [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.319 189385 DEBUG nova.virt.libvirt.host [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.319 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.320 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:31:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8b869036-db8e-4fd3-b57a-e59e272f3c73',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:31:35Z,direct_url=<?>,disk_format='qcow2',id=d3f57a9d-2502-43be-9afd-d2b6e1c15c08,min_disk=0,min_ram=0,name='cirros',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:31:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.320 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.321 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.321 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.321 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.321 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.322 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.322 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.322 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.323 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.323 189385 DEBUG nova.virt.hardware [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.326 189385 DEBUG nova.virt.libvirt.vif [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:34:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng',id=2,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-ske9c4nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:34:19Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzgzNDk1NTY2OTA3MDIwMDg5Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW4
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzgzNDk1NTY2OTA3MDIwMDg5Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=44e7d3d0-d059-412e-a1a9-467d774d2bee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.327 189385 DEBUG nova.network.os_vif_util [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.328 189385 DEBUG nova.network.os_vif_util [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:45:ac,bridge_name='br-int',has_traffic_filtering=True,id=c7376e3d-2069-45b2-a63a-2eefc475ad2b,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7376e3d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.328 189385 DEBUG nova.objects.instance [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'pci_devices' on Instance uuid 44e7d3d0-d059-412e-a1a9-467d774d2bee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.353 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] End _get_guest_xml xml=<domain type="kvm">
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <uuid>44e7d3d0-d059-412e-a1a9-467d774d2bee</uuid>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <name>instance-00000002</name>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <memory>524288</memory>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <metadata>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <nova:name>vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng</nova:name>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 10:34:23</nova:creationTime>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <nova:flavor name="m1.small">
Nov 25 10:34:23 compute-0 nova_compute[189381]:         <nova:memory>512</nova:memory>
Nov 25 10:34:23 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 10:34:23 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 10:34:23 compute-0 nova_compute[189381]:         <nova:ephemeral>1</nova:ephemeral>
Nov 25 10:34:23 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 10:34:23 compute-0 nova_compute[189381]:         <nova:user uuid="af7a147d86064a21a94066f72173bba2">admin</nova:user>
Nov 25 10:34:23 compute-0 nova_compute[189381]:         <nova:project uuid="aef0c6ba1dd54218a527ced3f8d2a1be">admin</nova:project>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="d3f57a9d-2502-43be-9afd-d2b6e1c15c08"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 10:34:23 compute-0 nova_compute[189381]:         <nova:port uuid="c7376e3d-2069-45b2-a63a-2eefc475ad2b">
Nov 25 10:34:23 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="192.168.0.71" ipVersion="4"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   </metadata>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <system>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <entry name="serial">44e7d3d0-d059-412e-a1a9-467d774d2bee</entry>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <entry name="uuid">44e7d3d0-d059-412e-a1a9-467d774d2bee</entry>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </system>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <os>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   </os>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <features>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <apic/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   </features>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   </clock>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <target dev="vdb" bus="virtio"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.config"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:ab:45:ac"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <target dev="tapc7376e3d-20"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </interface>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/console.log" append="off"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </serial>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <video>
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </video>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 10:34:23 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 10:34:23 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 10:34:23 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:34:23 compute-0 nova_compute[189381]: </domain>
Nov 25 10:34:23 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.354 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Preparing to wait for external event network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.354 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.355 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.355 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.356 189385 DEBUG nova.virt.libvirt.vif [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:34:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng',id=2,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-ske9c4nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:34:19Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzgzNDk1NTY2OTA3MDIwMDg5Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzgzNDk1NTY2OTA3MDIwMDg5Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=44e7d3d0-d059-412e-a1a9-467d774d2bee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.356 189385 DEBUG nova.network.os_vif_util [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.357 189385 DEBUG nova.network.os_vif_util [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:45:ac,bridge_name='br-int',has_traffic_filtering=True,id=c7376e3d-2069-45b2-a63a-2eefc475ad2b,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7376e3d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.357 189385 DEBUG os_vif [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:45:ac,bridge_name='br-int',has_traffic_filtering=True,id=c7376e3d-2069-45b2-a63a-2eefc475ad2b,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7376e3d-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.358 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.358 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.359 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.363 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.363 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc7376e3d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.364 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc7376e3d-20, col_values=(('external_ids', {'iface-id': 'c7376e3d-2069-45b2-a63a-2eefc475ad2b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:45:ac', 'vm-uuid': '44e7d3d0-d059-412e-a1a9-467d774d2bee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.366 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:23 compute-0 NetworkManager[56317]: <info>  [1764066863.3682] manager: (tapc7376e3d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.369 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.375 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.376 189385 INFO os_vif [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:45:ac,bridge_name='br-int',has_traffic_filtering=True,id=c7376e3d-2069-45b2-a63a-2eefc475ad2b,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7376e3d-20')
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.576 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.577 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.577 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.577 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No VIF found with MAC fa:16:3e:ab:45:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 10:34:23 compute-0 nova_compute[189381]: 2025-11-25 10:34:23.578 189385 INFO nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Using config drive
Nov 25 10:34:23 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:34:23.326 189385 DEBUG nova.virt.libvirt.vif [None req-cfaaf372-23 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 25 10:34:23 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:34:23.356 189385 DEBUG nova.virt.libvirt.vif [None req-cfaaf372-23 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 25 10:34:24 compute-0 nova_compute[189381]: 2025-11-25 10:34:24.287 189385 INFO nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Creating config drive at /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.config
Nov 25 10:34:24 compute-0 nova_compute[189381]: 2025-11-25 10:34:24.292 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppeaywie_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:34:24 compute-0 nova_compute[189381]: 2025-11-25 10:34:24.429 189385 DEBUG oslo_concurrency.processutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppeaywie_" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:34:24 compute-0 kernel: tapc7376e3d-20: entered promiscuous mode
Nov 25 10:34:24 compute-0 NetworkManager[56317]: <info>  [1764066864.5404] manager: (tapc7376e3d-20): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Nov 25 10:34:24 compute-0 nova_compute[189381]: 2025-11-25 10:34:24.545 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:24 compute-0 ovn_controller[97779]: 2025-11-25T10:34:24Z|00035|binding|INFO|Claiming lport c7376e3d-2069-45b2-a63a-2eefc475ad2b for this chassis.
Nov 25 10:34:24 compute-0 ovn_controller[97779]: 2025-11-25T10:34:24Z|00036|binding|INFO|c7376e3d-2069-45b2-a63a-2eefc475ad2b: Claiming fa:16:3e:ab:45:ac 192.168.0.71
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.564 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:45:ac 192.168.0.71'], port_security=['fa:16:3e:ab:45:ac 192.168.0.71'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-6oeui4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-port-clymc3k5eg3x', 'neutron:cidrs': '192.168.0.71/24', 'neutron:device_id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35870011-2c24-4719-a9ee-4942cd8ed50e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-6oeui4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-port-clymc3k5eg3x', 'neutron:project_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48d58879-e124-47b1-85de-2b7aab5c0e02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53f1de54-d9db-4691-881b-b04f921a948f, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=c7376e3d-2069-45b2-a63a-2eefc475ad2b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.566 106634 INFO neutron.agent.ovn.metadata.agent [-] Port c7376e3d-2069-45b2-a63a-2eefc475ad2b in datapath 35870011-2c24-4719-a9ee-4942cd8ed50e bound to our chassis
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.569 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35870011-2c24-4719-a9ee-4942cd8ed50e
Nov 25 10:34:24 compute-0 ovn_controller[97779]: 2025-11-25T10:34:24Z|00037|binding|INFO|Setting lport c7376e3d-2069-45b2-a63a-2eefc475ad2b ovn-installed in OVS
Nov 25 10:34:24 compute-0 ovn_controller[97779]: 2025-11-25T10:34:24Z|00038|binding|INFO|Setting lport c7376e3d-2069-45b2-a63a-2eefc475ad2b up in Southbound
Nov 25 10:34:24 compute-0 systemd-udevd[240252]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:34:24 compute-0 nova_compute[189381]: 2025-11-25 10:34:24.583 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:24 compute-0 nova_compute[189381]: 2025-11-25 10:34:24.584 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.589 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[04693cb8-18cf-4c3c-8e22-09985e107607]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:34:24 compute-0 NetworkManager[56317]: <info>  [1764066864.6036] device (tapc7376e3d-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:34:24 compute-0 NetworkManager[56317]: <info>  [1764066864.6043] device (tapc7376e3d-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 10:34:24 compute-0 systemd-machined[155706]: New machine qemu-2-instance-00000002.
Nov 25 10:34:24 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.623 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[8f43ade8-46f2-46fb-9cfb-20b11e65e3bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.628 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[756ec9de-d305-4c9c-8309-2dd540c1f7fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:34:24 compute-0 podman[240233]: 2025-11-25 10:34:24.649804977 +0000 UTC m=+0.116389029 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.662 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[0f606af4-afd7-41cd-847d-d7f16a7d7912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.678 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[829e72d6-76ec-41e5-aa66-1b0dfebb5d80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35870011-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:64:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369752, 'reachable_time': 36390, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240267, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.695 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a3e54e-a099-4b76-b583-4df56b2e084e]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369763, 'tstamp': 369763}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240273, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369766, 'tstamp': 369766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240273, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
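The two privsep replies above carry pyroute2-style netlink payloads read inside the ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e namespace: an RTM_NEWLINK for the veth tap35870011-21 (operstate UP, MTU 1500) and RTM_NEWADDR records for 192.168.0.2/24 plus the metadata address 169.254.169.254/32. A minimal sketch of reading the same state directly with pyroute2 (assumptions: the library is available, the namespace still exists, and the caller has root; this is not the agent's own code path):

    # Read link and address state inside the OVN metadata namespace,
    # mirroring the RTM_NEWLINK/RTM_NEWADDR replies logged above.
    from pyroute2 import NetNS

    NS = 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e'  # from the log

    with NetNS(NS) as ns:  # requires root
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_LABEL'),
                  '%s/%s' % (addr.get_attr('IFA_ADDRESS'), addr['prefixlen']))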
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.697 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35870011-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:34:24 compute-0 nova_compute[189381]: 2025-11-25 10:34:24.699 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:24 compute-0 nova_compute[189381]: 2025-11-25 10:34:24.700 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.701 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35870011-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.701 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.702 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35870011-20, col_values=(('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:34:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:24.702 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
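The transaction trio above detaches tap35870011-20 from br-ex, attaches it to br-int, and points external_ids:iface-id at the Neutron port UUID; the two "Transaction caused no change" lines mean the database rows were already in the requested state, so the commits were no-ops. A sketch of issuing the same three commands with ovsdbapp, assuming the usual local ovsdb-server unix socket:

    # Replay the DelPortCommand / AddPortCommand / DbSetCommand sequence
    # from the log against the local Open vSwitch database.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')  # socket path assumed
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    port = 'tap35870011-20'
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port(port, bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', port, may_exist=True))
        txn.add(api.db_set(
            'Interface', port,
            ('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'})))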
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.121 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764066865.1212976, 44e7d3d0-d059-412e-a1a9-467d774d2bee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.122 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] VM Started (Lifecycle Event)
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.139 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.145 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764066865.121434, 44e7d3d0-d059-412e-a1a9-467d774d2bee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.146 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] VM Paused (Lifecycle Event)
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.169 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.176 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.192 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] During sync_power_state the instance has a pending task (spawning). Skip.
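The Started/Paused pair is the normal libvirt spawn flow: the domain is created paused, so nova observes VM power_state 3 (paused) while the database still records 0 (no state), and the sync is skipped because task_state is still spawning; the later Resumed event reports power_state 1 (running). A sketch of the equivalent query with libvirt-python (connection URI and the state-name table are assumptions consistent with the codes logged):

    # Check the guest's power state the way the _get_power_state calls
    # in the log do, via libvirt. UUID comes from the log.
    import libvirt

    UUID = '44e7d3d0-d059-412e-a1a9-467d774d2bee'
    NAMES = {libvirt.VIR_DOMAIN_NOSTATE: 'NOSTATE',   # 0 in the log
             libvirt.VIR_DOMAIN_RUNNING: 'RUNNING',   # 1
             libvirt.VIR_DOMAIN_PAUSED: 'PAUSED',     # 3
             libvirt.VIR_DOMAIN_SHUTOFF: 'SHUTOFF'}

    conn = libvirt.open('qemu:///system')  # URI assumed
    state, reason = conn.lookupByUUIDString(UUID).state()
    print(NAMES.get(state, state), 'reason:', reason)
    conn.close()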
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.347 189385 DEBUG nova.network.neutron [req-26f3ecf5-3ac9-48c2-bfe0-f8bc67c33de4 req-20fcdabb-92de-4646-9821-a62508178ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updated VIF entry in instance network info cache for port c7376e3d-2069-45b2-a63a-2eefc475ad2b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.348 189385 DEBUG nova.network.neutron [req-26f3ecf5-3ac9-48c2-bfe0-f8bc67c33de4 req-20fcdabb-92de-4646-9821-a62508178ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updating instance_info_cache with network_info: [{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.364 189385 DEBUG oslo_concurrency.lockutils [req-26f3ecf5-3ac9-48c2-bfe0-f8bc67c33de4 req-20fcdabb-92de-4646-9821-a62508178ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.396 189385 DEBUG nova.compute.manager [req-274477ab-c31c-4138-8417-7e3890c45312 req-4af79741-bc8b-4c56-be0e-ba73a19d0fbd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received event network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.397 189385 DEBUG oslo_concurrency.lockutils [req-274477ab-c31c-4138-8417-7e3890c45312 req-4af79741-bc8b-4c56-be0e-ba73a19d0fbd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.398 189385 DEBUG oslo_concurrency.lockutils [req-274477ab-c31c-4138-8417-7e3890c45312 req-4af79741-bc8b-4c56-be0e-ba73a19d0fbd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.398 189385 DEBUG oslo_concurrency.lockutils [req-274477ab-c31c-4138-8417-7e3890c45312 req-4af79741-bc8b-4c56-be0e-ba73a19d0fbd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.399 189385 DEBUG nova.compute.manager [req-274477ab-c31c-4138-8417-7e3890c45312 req-4af79741-bc8b-4c56-be0e-ba73a19d0fbd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Processing event network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.401 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
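The lock lines around pop_instance_event show the per-instance "<uuid>-events" lock held for about a millisecond while the incoming network-vif-plugged event pops its waiter, after which the spawning request reports the wait completed in 0 seconds because the event had already arrived. A sketch of that wait/deliver pattern built on oslo.concurrency; the helper names are illustrative, not nova's internals:

    # Wait/deliver pattern: an external event wakes a thread blocked in
    # a wait call, guarded by a short-lived per-instance lock.
    import threading
    from oslo_concurrency import lockutils

    _waiters = {}  # event name -> threading.Event

    def wait_for(instance_uuid, event_name, timeout=300):
        ev = threading.Event()
        with lockutils.lock('%s-events' % instance_uuid):
            _waiters[event_name] = ev
        if not ev.wait(timeout):
            raise TimeoutError(event_name)

    def deliver(instance_uuid, event_name):
        with lockutils.lock('%s-events' % instance_uuid):
            ev = _waiters.pop(event_name, None)
        if ev:
            ev.set()  # "Instance event wait completed in 0 seconds"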
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.413 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764066865.412684, 44e7d3d0-d059-412e-a1a9-467d774d2bee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.413 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] VM Resumed (Lifecycle Event)
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.416 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.424 189385 INFO nova.virt.libvirt.driver [-] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Instance spawned successfully.
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.424 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.444 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.452 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.457 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.457 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.458 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.458 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.459 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.459 189385 DEBUG nova.virt.libvirt.driver [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.486 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.576 189385 INFO nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Took 5.80 seconds to spawn the instance on the hypervisor.
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.577 189385 DEBUG nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.641 189385 INFO nova.compute.manager [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Took 6.31 seconds to build instance.
Nov 25 10:34:25 compute-0 nova_compute[189381]: 2025-11-25 10:34:25.663 189385 DEBUG oslo_concurrency.lockutils [None req-cfaaf372-23fb-401b-a457-fd19576e882e af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:27 compute-0 nova_compute[189381]: 2025-11-25 10:34:27.029 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:27 compute-0 nova_compute[189381]: 2025-11-25 10:34:27.499 189385 DEBUG nova.compute.manager [req-4616a4a3-f879-431f-9967-9d1992f7d2bb req-43f2989b-c5b0-49c7-ba26-2b3c655edd7b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received event network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:34:27 compute-0 nova_compute[189381]: 2025-11-25 10:34:27.501 189385 DEBUG oslo_concurrency.lockutils [req-4616a4a3-f879-431f-9967-9d1992f7d2bb req-43f2989b-c5b0-49c7-ba26-2b3c655edd7b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:27 compute-0 nova_compute[189381]: 2025-11-25 10:34:27.502 189385 DEBUG oslo_concurrency.lockutils [req-4616a4a3-f879-431f-9967-9d1992f7d2bb req-43f2989b-c5b0-49c7-ba26-2b3c655edd7b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:27 compute-0 nova_compute[189381]: 2025-11-25 10:34:27.503 189385 DEBUG oslo_concurrency.lockutils [req-4616a4a3-f879-431f-9967-9d1992f7d2bb req-43f2989b-c5b0-49c7-ba26-2b3c655edd7b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:27 compute-0 nova_compute[189381]: 2025-11-25 10:34:27.503 189385 DEBUG nova.compute.manager [req-4616a4a3-f879-431f-9967-9d1992f7d2bb req-43f2989b-c5b0-49c7-ba26-2b3c655edd7b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] No waiting events found dispatching network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:34:27 compute-0 nova_compute[189381]: 2025-11-25 10:34:27.504 189385 WARNING nova.compute.manager [req-4616a4a3-f879-431f-9967-9d1992f7d2bb req-43f2989b-c5b0-49c7-ba26-2b3c655edd7b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received unexpected event network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b for instance with vm_state active and task_state None.
Nov 25 10:34:28 compute-0 nova_compute[189381]: 2025-11-25 10:34:28.367 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:28 compute-0 podman[240284]: 2025-11-25 10:34:28.964131855 +0000 UTC m=+0.066550715 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:34:28 compute-0 podman[240283]: 2025-11-25 10:34:28.979151087 +0000 UTC m=+0.084386938 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 10:34:29 compute-0 podman[203557]: time="2025-11-25T10:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:34:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:34:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
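These two requests come from a client polling the podman API service; the podman_exporter container further down is configured with CONTAINER_HOST=unix:///run/podman/podman.sock, which is the socket assumed here. A stdlib-only sketch of repeating the container listing by hand:

    # Issue the same libpod query seen in the log over podman's unix
    # socket. HTTP/1.0 keeps the reply unchunked for this sketch.
    import json
    import socket

    SOCK = '/run/podman/podman.sock'  # path assumed from the exporter config
    REQ = (b'GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n'
           b'Host: localhost\r\n\r\n')

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(REQ)
        raw = b''
        while chunk := s.recv(65536):
            raw += chunk

    body = raw.split(b'\r\n\r\n', 1)[1]
    for c in json.loads(body):
        print(c['Names'], c['State'])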
Nov 25 10:34:31 compute-0 podman[240323]: 2025-11-25 10:34:31.027956554 +0000 UTC m=+0.133903033 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:34:31 compute-0 openstack_network_exporter[205722]: ERROR   10:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:34:31 compute-0 openstack_network_exporter[205722]: ERROR   10:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:34:31 compute-0 openstack_network_exporter[205722]: ERROR   10:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:34:31 compute-0 openstack_network_exporter[205722]: ERROR   10:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:34:31 compute-0 openstack_network_exporter[205722]: 
Nov 25 10:34:31 compute-0 openstack_network_exporter[205722]: ERROR   10:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:34:31 compute-0 openstack_network_exporter[205722]: 
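The exporter's appctl-style calls locate each daemon through its control socket, named <rundir>/<daemon>.<pid>.ctl; ovn-northd does not run on a compute node, so "no control socket files found" for it is expected here, and the dpif-netdev/pmd-* calls fail because those commands only apply to a userspace (DPDK) datapath, which this node does not use. A sketch of the socket-discovery convention (run-dir paths are assumptions matching the exporter's volume mounts above):

    # Look for OVS/OVN control sockets the way appctl-style tooling
    # does: <rundir>/<daemon>.<pid>.ctl.
    import glob
    import os

    for rundir in ('/var/run/openvswitch', '/var/lib/openvswitch/ovn'):
        hits = glob.glob(os.path.join(rundir, '*.ctl'))
        for path in hits:
            print('control socket:', path)
        if not any('ovn-northd' in p for p in hits):
            print('no ovn-northd socket under', rundir)  # matches the error above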
Nov 25 10:34:32 compute-0 nova_compute[189381]: 2025-11-25 10:34:32.031 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.228 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.249 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 31174924-a3e8-4662-baad-ac9aa49c01ab _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.250 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 44e7d3d0-d059-412e-a1a9-467d774d2bee _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.251 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.251 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.252 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.252 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.310 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.321 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:33 compute-0 nova_compute[189381]: 2025-11-25 10:34:33.370 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:34 compute-0 podman[240349]: 2025-11-25 10:34:34.970110116 +0000 UTC m=+0.078120779 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 10:34:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:36.033 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:34:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:36.034 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:34:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:34:36.034 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:34:37 compute-0 nova_compute[189381]: 2025-11-25 10:34:37.033 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:38 compute-0 nova_compute[189381]: 2025-11-25 10:34:38.374 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:39 compute-0 podman[240368]: 2025-11-25 10:34:39.942988859 +0000 UTC m=+0.059720259 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:34:42 compute-0 nova_compute[189381]: 2025-11-25 10:34:42.037 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:43 compute-0 nova_compute[189381]: 2025-11-25 10:34:43.376 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:45 compute-0 podman[240392]: 2025-11-25 10:34:45.968282614 +0000 UTC m=+0.073499714 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:34:45 compute-0 podman[240391]: 2025-11-25 10:34:45.993911641 +0000 UTC m=+0.101932282 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 25 10:34:47 compute-0 nova_compute[189381]: 2025-11-25 10:34:47.042 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:47 compute-0 podman[240429]: 2025-11-25 10:34:47.978232844 +0000 UTC m=+0.096635701 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4)
Nov 25 10:34:48 compute-0 nova_compute[189381]: 2025-11-25 10:34:48.379 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:52 compute-0 nova_compute[189381]: 2025-11-25 10:34:52.044 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:53 compute-0 nova_compute[189381]: 2025-11-25 10:34:53.381 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:54 compute-0 ovn_controller[97779]: 2025-11-25T10:34:54Z|00039|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Nov 25 10:34:54 compute-0 podman[240459]: 2025-11-25 10:34:54.983277363 +0000 UTC m=+0.093360206 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 10:34:57 compute-0 nova_compute[189381]: 2025-11-25 10:34:57.047 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:58 compute-0 nova_compute[189381]: 2025-11-25 10:34:58.384 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:34:59 compute-0 podman[203557]: time="2025-11-25T10:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:34:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:34:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4790 "" "Go-http-client/1.1"
Nov 25 10:34:59 compute-0 podman[240483]: 2025-11-25 10:34:59.955452645 +0000 UTC m=+0.068915753 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:34:59 compute-0 podman[240482]: 2025-11-25 10:34:59.956474604 +0000 UTC m=+0.075063150 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 10:35:01 compute-0 nova_compute[189381]: 2025-11-25 10:35:01.046 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:01 compute-0 openstack_network_exporter[205722]: ERROR   10:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:35:01 compute-0 openstack_network_exporter[205722]: ERROR   10:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:35:01 compute-0 openstack_network_exporter[205722]: ERROR   10:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:35:01 compute-0 openstack_network_exporter[205722]: ERROR   10:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:35:01 compute-0 openstack_network_exporter[205722]: 
Nov 25 10:35:01 compute-0 openstack_network_exporter[205722]: ERROR   10:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:35:01 compute-0 openstack_network_exporter[205722]: 
Nov 25 10:35:02 compute-0 podman[240540]: 2025-11-25 10:35:02.00418164 +0000 UTC m=+0.115000709 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:35:02 compute-0 nova_compute[189381]: 2025-11-25 10:35:02.049 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:02 compute-0 ovn_controller[97779]: 2025-11-25T10:35:02Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:45:ac 192.168.0.71
Nov 25 10:35:02 compute-0 ovn_controller[97779]: 2025-11-25T10:35:02Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:45:ac 192.168.0.71
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.327 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.328 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.334 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 44e7d3d0-d059-412e-a1a9-467d774d2bee from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 10:35:03 compute-0 nova_compute[189381]: 2025-11-25 10:35:03.389 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:03.725 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/44e7d3d0-d059-412e-a1a9-467d774d2bee -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 25 10:35:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:04.659 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 25 Nov 2025 10:35:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7a9d4196-4b16-4224-b909-da5e365204e1 x-openstack-request-id: req-7a9d4196-4b16-4224-b909-da5e365204e1 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 10:35:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:04.660 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "44e7d3d0-d059-412e-a1a9-467d774d2bee", "name": "vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng", "status": "ACTIVE", "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "user_id": "af7a147d86064a21a94066f72173bba2", "metadata": {"metering.server_group": "d1a74954-729e-4b7f-a26d-ccdc925aa15b"}, "hostId": "5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd", "image": {"id": "d3f57a9d-2502-43be-9afd-d2b6e1c15c08", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/d3f57a9d-2502-43be-9afd-d2b6e1c15c08"}]}, "flavor": {"id": "8b869036-db8e-4fd3-b57a-e59e272f3c73", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8b869036-db8e-4fd3-b57a-e59e272f3c73"}]}, "created": "2025-11-25T10:34:17Z", "updated": "2025-11-25T10:34:25Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.71", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ab:45:ac"}, {"version": 4, "addr": "192.168.122.221", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ab:45:ac"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/44e7d3d0-d059-412e-a1a9-467d774d2bee"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/44e7d3d0-d059-412e-a1a9-467d774d2bee"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-25T10:34:25.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 10:35:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:04.660 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/44e7d3d0-d059-412e-a1a9-467d774d2bee used request id req-7a9d4196-4b16-4224-b909-da5e365204e1 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 25 10:35:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:04.661 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'name': 'vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:35:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:04.664 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 31174924-a3e8-4662-baad-ac9aa49c01ab from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 10:35:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:04.664 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/31174924-a3e8-4662-baad-ac9aa49c01ab -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.036 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Tue, 25 Nov 2025 10:35:04 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ab40bd3c-2837-40b1-aeea-68281de2a4de x-openstack-request-id: req-ab40bd3c-2837-40b1-aeea-68281de2a4de _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.037 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "31174924-a3e8-4662-baad-ac9aa49c01ab", "name": "test_0", "status": "ACTIVE", "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "user_id": "af7a147d86064a21a94066f72173bba2", "metadata": {}, "hostId": "5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd", "image": {"id": "d3f57a9d-2502-43be-9afd-d2b6e1c15c08", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/d3f57a9d-2502-43be-9afd-d2b6e1c15c08"}]}, "flavor": {"id": "8b869036-db8e-4fd3-b57a-e59e272f3c73", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8b869036-db8e-4fd3-b57a-e59e272f3c73"}]}, "created": "2025-11-25T10:32:53Z", "updated": "2025-11-25T10:33:09Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.95", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f3:39:09"}, {"version": 4, "addr": "192.168.122.239", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f3:39:09"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/31174924-a3e8-4662-baad-ac9aa49c01ab"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/31174924-a3e8-4662-baad-ac9aa49c01ab"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-25T10:33:09.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.037 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/31174924-a3e8-4662-baad-ac9aa49c01ab used request id req-ab40bd3c-2837-40b1-aeea-68281de2a4de request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.038 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.038 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.039 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:35:05.038862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.045 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 44e7d3d0-d059-412e-a1a9-467d774d2bee / tapc7376e3d-20 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.046 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes volume: 1421 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.049 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.051 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 31174924-a3e8-4662-baad-ac9aa49c01ab / tapb6cf5c87-86 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.052 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2174 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.053 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.053 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.054 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.054 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:35:05.054079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.054 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.055 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.055 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.056 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.057 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:35:05.056390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.087 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/memory.usage volume: 33.1953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.116 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.117 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.117 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.117 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-25T10:35:05.117337) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.118 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng>, <NovaLikeServer: test_0>]
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.119 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.119 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:35:05.119163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.119 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.120 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.120 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.120 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.120 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.120 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:35:05.120523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.121 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.121 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.122 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:35:05.121813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.122 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.123 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.123 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/cpu volume: 34510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:35:05.123078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.123 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 38650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.123 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.124 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.124 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.124 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.124 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:35:05.124378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.125 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.126 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:35:05.126006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.155 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.156 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.156 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.162 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.185 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.186 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.186 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.188 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.188 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:35:05.188394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 podman[240566]: 2025-11-25 10:35:05.201195097 +0000 UTC m=+0.095205350 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.232 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
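Interleaved with the ceilometer polling, nova-compute is probing the same instance's disk files; each qemu-img info call is wrapped by oslo_concurrency's prlimit runner so a malformed image cannot push the probe past 1 GiB of address space (--as=1073741824) or 30 s of CPU time (--cpu=30). A hedged reconstruction of that call; processutils.execute and ProcessLimits are the real oslo_concurrency API, while the JSON handling at the end is illustrative:

    # Reconstruction of the prlimit-wrapped probe in the nova lines
    # above. processutils.execute/ProcessLimits are real oslo APIs;
    # the JSON handling afterwards is an illustrative sketch.
    import json
    from oslo_concurrency import processutils

    path = "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk"
    out, _err = processutils.execute(
        "qemu-img", "info", path, "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1 << 30,  # --as
                                           cpu_time=30),           # --cpu
        env_variables={"LC_ALL": "C", "LANG": "C"})
    info = json.loads(out)
    print(info["virtual-size"], info.get("actual-size"))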
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.234 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.265 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.266 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.266 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.299 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.300 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.337 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.338 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.338 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.340 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.341 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.341 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 1589454420 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.342 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 365927498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:35:05.341021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.342 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 408314029 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.343 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.344 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.344 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
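The read/write latency volumes are large, monotonically growing numbers because the underlying libvirt block stats report them as cumulative counters (total nanoseconds spent servicing I/O since the domain started), not instantaneous latencies; a per-interval figure is derived downstream by differencing successive polls. A trivial illustration, with a hypothetical earlier reading and an assumed 30 s polling interval:

    # The latency meters above are cumulative nanosecond counters, so
    # a usable per-interval rate comes from differencing successive
    # polls. The previous reading and interval here are assumptions.
    def io_wait_ns_per_s(prev_total_ns, curr_total_ns, interval_s):
        """Nanoseconds of I/O wait accumulated per wall-clock second."""
        return max(curr_total_ns - prev_total_ns, 0) / interval_s

    print(io_wait_ns_per_s(1_589_000_000, 1_589_454_420, 30.0))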
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.345 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.345 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.346 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.346 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:35:05.345796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.346 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.347 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.347 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.348 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.348 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:35:05.348714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.348 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.349 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.349 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.349 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.350 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.350 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.351 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.351 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.351 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 41590784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.351 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:35:05.351428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.352 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.352 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.352 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.353 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.354 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.354 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.354 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 31716504486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.355 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 231382257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.355 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:35:05.354515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.355 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.356 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.356 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.357 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.357 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.357 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.358 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
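The power.state samples carry the raw libvirt domain state enum rather than a boolean; both instances report 1. For reference, libvirt's virDomainState values:

    # libvirt virDomainState values; "power.state volume: 1" above
    # means both guests are in VIR_DOMAIN_RUNNING.
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])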
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:35:05.357607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.359 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.359 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:35:05.359112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.360 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.360 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.360 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.360 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.361 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.362 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:35:05.362034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.362 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.363 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:35:05.363722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.364 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.364 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.364 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.365 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.365 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
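Taken together, the three per-device size meters in this window map onto the triple returned by libvirt's virDomainGetBlockInfo: capacity (logical size), allocation (host bytes allocated to the image), and physical (what ceilometer reports as disk.device.usage). A minimal check against a live domain; libvirt-python and its blockInfo call are real, but this must run on the compute host and the device name 'vda' is an assumption:

    # Mapping of the capacity/allocation/usage meters onto libvirt's
    # virDomainGetBlockInfo. Real libvirt-python API; the UUID comes
    # from the log, the device name 'vda' is an assumption.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("44e7d3d0-d059-412e-a1a9-467d774d2bee")
    capacity, allocation, physical = dom.blockInfo("vda")
    print(capacity)    # -> disk.device.capacity   (logical bytes)
    print(allocation)  # -> disk.device.allocation (allocated bytes)
    print(physical)    # -> disk.device.usage      (on-disk image bytes)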
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.366 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.366 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.366 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng>, <NovaLikeServer: test_0>]
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-25T10:35:05.366338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
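The single ERROR in this window is expected behaviour rather than a failure: the libvirt inspector supplies no pre-computed *.rate data, so the pollster raises ceilometer's PollsterPermanentError and the manager blacklists the two listed servers for that pollster/source pair instead of retrying every cycle. A compressed sketch of that contract; the exception class is redefined locally only so the sketch runs stand-alone:

    # Sketch of the permanent-blacklist contract behind the ERROR
    # above. ceilometer.polling.plugin_base.PollsterPermanentError is
    # real; it is redefined here to keep the sketch self-contained.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def poll_once(get_samples, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)   # never polled again
            return []

    def rate_pollster(resources):
        # The inspector has no rate data, so fail permanently.
        raise PollsterPermanentError(resources)

    blacklist = []
    poll_once(rate_pollster,
              ["vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng",
               "test_0"], blacklist)
    print(blacklist)   # both servers now excluded, as the log reports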
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.367 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.367 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:35:05.367497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.368 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.369 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:35:05.368913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.369 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.369 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.370 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.370 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.370 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:35:05.370439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.370 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.371 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.371 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.371 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:35:05.371490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.372 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.370 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.371 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.372 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.373 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:35:05.373322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.373 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.373 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.375 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.376 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:35:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:35:05.377 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
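[editor's note] The run of "Finished processing pollster [...]" records above marks the end of one ceilometer polling cycle. A minimal sketch for tallying which pollsters completed in an excerpt like this one — the regex is derived only from the line format visible above, and the file name is hypothetical:

```python
import re
from collections import Counter

# Matches the ceilometer completion records shown above, e.g.
# "... DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. ..."
FINISHED = re.compile(r"Finished processing pollster \[([^\]]+)\]")

def completed_pollsters(lines):
    """Count 'Finished processing pollster' records per pollster name."""
    counts = Counter()
    for line in lines:
        m = FINISHED.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

# Usage (hypothetical file name):
# print(completed_pollsters(open("compute-0.log")))
```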
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.428 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.435 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.500 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.501 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.561 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.562 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.623 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.624 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:35:05 compute-0 nova_compute[189381]: 2025-11-25 10:35:05.696 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
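[editor's note] The qemu-img invocations above are wrapped by oslo.concurrency's prlimit helper, which caps the child's address space (--as=1073741824) and CPU seconds (--cpu=30) before probing the image. A minimal sketch of the same call pattern, assuming the oslo.concurrency API as logged and an illustrative instance path:

```python
import json
from oslo_concurrency import processutils

# Resource caps matching the logged flags: --as=1073741824 --cpu=30
LIMITS = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

def probe_disk(path):
    """Run qemu-img info under prlimit, mirroring the nova calls above (sketch)."""
    out, _err = processutils.execute(
        'qemu-img', 'info', path, '--force-share', '--output=json',
        prlimit=LIMITS,
        env_variables={'LC_ALL': 'C', 'LANG': 'C'})
    return json.loads(out)

# e.g. probe_disk('/var/lib/nova/instances/<uuid>/disk.eph0')
```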
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.072 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.074 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5066MB free_disk=72.18635177612305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.074 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.075 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.430 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.430 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.431 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.431 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.498 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.512 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
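[editor's note] The inventory record above is what determines schedulable capacity in placement: usable = (total - reserved) × allocation_ratio per resource class. A quick check against the logged figures (the arithmetic is mine, not from the log):

```python
# Placement capacity formula: (total - reserved) * allocation_ratio
inventory = {
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, usable)
# MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 70.2 -- consistent with the final
# resource view above: the two instances consume 2 VCPU, 1024 MB and 4 GB.
```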
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.541 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:35:06 compute-0 nova_compute[189381]: 2025-11-25 10:35:06.541 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
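[editor's note] The "compute_resources" acquire/release pair above (held 0.467s) comes from oslo.concurrency's named internal locks. A minimal sketch of the same pattern, assuming the lockutils API; the function body is elided, not nova's actual code:

```python
from oslo_concurrency import lockutils

# Context-manager form: equivalent to the acquire/release pair logged above.
with lockutils.lock('compute_resources'):
    pass  # mutate the resource tracker's view while holding the lock

# Decorator form, as ResourceTracker methods use it:
@lockutils.synchronized('compute_resources')
def _update_available_resource():
    pass  # runs with the named lock held
```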
Nov 25 10:35:07 compute-0 nova_compute[189381]: 2025-11-25 10:35:07.052 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:08 compute-0 nova_compute[189381]: 2025-11-25 10:35:08.391 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:08 compute-0 nova_compute[189381]: 2025-11-25 10:35:08.536 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:08 compute-0 nova_compute[189381]: 2025-11-25 10:35:08.536 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:08 compute-0 nova_compute[189381]: 2025-11-25 10:35:08.536 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:35:08 compute-0 nova_compute[189381]: 2025-11-25 10:35:08.537 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:35:10 compute-0 nova_compute[189381]: 2025-11-25 10:35:10.297 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:35:10 compute-0 nova_compute[189381]: 2025-11-25 10:35:10.297 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:35:10 compute-0 nova_compute[189381]: 2025-11-25 10:35:10.297 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:35:10 compute-0 nova_compute[189381]: 2025-11-25 10:35:10.298 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:35:10 compute-0 podman[240609]: 2025-11-25 10:35:10.961106199 +0000 UTC m=+0.070512319 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
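[editor's note] The podman "container health_status ... health_status=healthy" events above and below are emitted each time the configured healthcheck ('test': '/openstack/healthcheck podman_exporter') fires. The same check can be triggered by hand; a sketch via the standard podman CLI (container name taken from the record above):

```python
import subprocess

# Run the container's configured healthcheck once, as the periodic timer does;
# `podman healthcheck run` exits 0 when the check passes.
result = subprocess.run(
    ['podman', 'healthcheck', 'run', 'podman_exporter'],
    capture_output=True, text=True)
print('healthy' if result.returncode == 0 else 'unhealthy', result.stdout)
```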
Nov 25 10:35:12 compute-0 nova_compute[189381]: 2025-11-25 10:35:12.054 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:12 compute-0 nova_compute[189381]: 2025-11-25 10:35:12.454 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
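[editor's note] The instance_info_cache payload above is a JSON list of VIFs; fixed and floating addresses sit under network.subnets[].ips. A short sketch that pulls them out, with field names taken directly from that record:

```python
import json

def addresses(network_info_json):
    """Yield (fixed_ip, [floating_ips...]) pairs from a nova network_info blob."""
    for vif in json.loads(network_info_json):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                yield ip['address'], floats

# For the record above this yields ('192.168.0.95', ['192.168.122.239']).
```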
Nov 25 10:35:12 compute-0 nova_compute[189381]: 2025-11-25 10:35:12.467 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:35:12 compute-0 nova_compute[189381]: 2025-11-25 10:35:12.468 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:35:12 compute-0 nova_compute[189381]: 2025-11-25 10:35:12.469 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:12 compute-0 nova_compute[189381]: 2025-11-25 10:35:12.469 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:35:13 compute-0 nova_compute[189381]: 2025-11-25 10:35:13.394 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:16 compute-0 podman[240634]: 2025-11-25 10:35:16.991973419 +0000 UTC m=+0.104234600 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:35:16 compute-0 podman[240635]: 2025-11-25 10:35:16.993996277 +0000 UTC m=+0.101720447 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 10:35:17 compute-0 nova_compute[189381]: 2025-11-25 10:35:17.057 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:18 compute-0 nova_compute[189381]: 2025-11-25 10:35:18.397 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:18 compute-0 podman[240672]: 2025-11-25 10:35:18.956991247 +0000 UTC m=+0.071119007 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Nov 25 10:35:22 compute-0 nova_compute[189381]: 2025-11-25 10:35:22.060 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:23 compute-0 nova_compute[189381]: 2025-11-25 10:35:23.400 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:25 compute-0 podman[240694]: 2025-11-25 10:35:25.949423002 +0000 UTC m=+0.067286816 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:35:27 compute-0 nova_compute[189381]: 2025-11-25 10:35:27.062 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:28 compute-0 nova_compute[189381]: 2025-11-25 10:35:28.403 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:29 compute-0 podman[203557]: time="2025-11-25T10:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:35:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:35:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
Nov 25 10:35:30 compute-0 podman[240712]: 2025-11-25 10:35:30.949832919 +0000 UTC m=+0.061843470 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:35:30 compute-0 podman[240711]: 2025-11-25 10:35:30.975996942 +0000 UTC m=+0.092294436 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Nov 25 10:35:31 compute-0 openstack_network_exporter[205722]: ERROR   10:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:35:31 compute-0 openstack_network_exporter[205722]: ERROR   10:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:35:31 compute-0 openstack_network_exporter[205722]: ERROR   10:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:35:31 compute-0 openstack_network_exporter[205722]: ERROR   10:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:35:31 compute-0 openstack_network_exporter[205722]: ERROR   10:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
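[editor's note] The exporter errors above mean it found no ovsdb-server/ovn-northd control sockets in the runtime directories it watches — expected on a compute node that runs neither daemon. A hedged sketch of the kind of probe involved; the glob patterns are illustrative, not the exporter's exact paths:

```python
import glob

# Typical ovs/ovn control-socket locations (illustrative).
PATTERNS = [
    '/run/openvswitch/ovsdb-server.*.ctl',
    '/run/ovn/ovn-northd.*.ctl',
]
for pattern in PATTERNS:
    if not glob.glob(pattern):
        print(f'no control socket files found for {pattern}')
```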
Nov 25 10:35:32 compute-0 nova_compute[189381]: 2025-11-25 10:35:32.064 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:33 compute-0 podman[240753]: 2025-11-25 10:35:33.006615025 +0000 UTC m=+0.118125989 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:35:33 compute-0 nova_compute[189381]: 2025-11-25 10:35:33.405 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:35 compute-0 podman[240778]: 2025-11-25 10:35:35.972521615 +0000 UTC m=+0.083921105 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 10:35:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:35:36.035 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:35:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:35:36.036 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:35:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:35:36.037 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:35:37 compute-0 nova_compute[189381]: 2025-11-25 10:35:37.067 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:38 compute-0 nova_compute[189381]: 2025-11-25 10:35:38.407 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:41 compute-0 podman[240797]: 2025-11-25 10:35:41.9524849 +0000 UTC m=+0.067017752 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:35:42 compute-0 nova_compute[189381]: 2025-11-25 10:35:42.069 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:43 compute-0 nova_compute[189381]: 2025-11-25 10:35:43.410 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:47 compute-0 nova_compute[189381]: 2025-11-25 10:35:47.072 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:47 compute-0 podman[240823]: 2025-11-25 10:35:47.982739015 +0000 UTC m=+0.091051438 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:35:47 compute-0 podman[240822]: 2025-11-25 10:35:47.994870126 +0000 UTC m=+0.102212961 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 25 10:35:48 compute-0 nova_compute[189381]: 2025-11-25 10:35:48.414 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:49 compute-0 podman[240860]: 2025-11-25 10:35:49.966982468 +0000 UTC m=+0.082480470 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=)
Nov 25 10:35:52 compute-0 nova_compute[189381]: 2025-11-25 10:35:52.074 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:53 compute-0 nova_compute[189381]: 2025-11-25 10:35:53.417 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:56 compute-0 podman[240878]: 2025-11-25 10:35:56.95644745 +0000 UTC m=+0.070012169 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:35:57 compute-0 nova_compute[189381]: 2025-11-25 10:35:57.076 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:58 compute-0 nova_compute[189381]: 2025-11-25 10:35:58.420 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:35:59 compute-0 podman[203557]: time="2025-11-25T10:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:35:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:35:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Nov 25 10:36:01 compute-0 openstack_network_exporter[205722]: ERROR   10:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:36:01 compute-0 openstack_network_exporter[205722]: ERROR   10:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:36:01 compute-0 openstack_network_exporter[205722]: ERROR   10:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:36:01 compute-0 openstack_network_exporter[205722]: ERROR   10:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:36:01 compute-0 openstack_network_exporter[205722]: ERROR   10:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:36:01 compute-0 podman[240899]: 2025-11-25 10:36:01.985734707 +0000 UTC m=+0.087330090 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:36:01 compute-0 podman[240898]: 2025-11-25 10:36:01.992870403 +0000 UTC m=+0.098836433 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 10:36:02 compute-0 nova_compute[189381]: 2025-11-25 10:36:02.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:02 compute-0 nova_compute[189381]: 2025-11-25 10:36:02.079 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:03 compute-0 nova_compute[189381]: 2025-11-25 10:36:03.422 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:04 compute-0 podman[240940]: 2025-11-25 10:36:04.008160866 +0000 UTC m=+0.119979546 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:36:05 compute-0 nova_compute[189381]: 2025-11-25 10:36:05.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.024 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.054 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.054 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.054 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.054 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.133 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.201 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.202 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.275 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.277 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.338 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.338 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.414 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.421 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.479 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.480 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.553 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.555 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.622 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.623 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:36:06 compute-0 nova_compute[189381]: 2025-11-25 10:36:06.708 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:36:06 compute-0 podman[240990]: 2025-11-25 10:36:06.97448421 +0000 UTC m=+0.083548190 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.052 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.053 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=72.18637084960938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.054 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.054 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.084 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.127 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.127 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.127 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.128 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.206 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.225 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.226 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:36:07 compute-0 nova_compute[189381]: 2025-11-25 10:36:07.227 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:36:08 compute-0 nova_compute[189381]: 2025-11-25 10:36:08.228 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:08 compute-0 nova_compute[189381]: 2025-11-25 10:36:08.425 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:09 compute-0 nova_compute[189381]: 2025-11-25 10:36:09.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:09 compute-0 nova_compute[189381]: 2025-11-25 10:36:09.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:09 compute-0 nova_compute[189381]: 2025-11-25 10:36:09.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:36:09 compute-0 nova_compute[189381]: 2025-11-25 10:36:09.522 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:36:09 compute-0 nova_compute[189381]: 2025-11-25 10:36:09.523 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:36:09 compute-0 nova_compute[189381]: 2025-11-25 10:36:09.523 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:36:10 compute-0 nova_compute[189381]: 2025-11-25 10:36:10.555 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updating instance_info_cache with network_info: [{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:36:10 compute-0 nova_compute[189381]: 2025-11-25 10:36:10.611 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:36:10 compute-0 nova_compute[189381]: 2025-11-25 10:36:10.612 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:36:10 compute-0 nova_compute[189381]: 2025-11-25 10:36:10.612 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:12 compute-0 nova_compute[189381]: 2025-11-25 10:36:12.085 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:12 compute-0 podman[241011]: 2025-11-25 10:36:12.953207252 +0000 UTC m=+0.067421024 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:36:13 compute-0 nova_compute[189381]: 2025-11-25 10:36:13.428 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:15 compute-0 nova_compute[189381]: 2025-11-25 10:36:15.606 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:36:17 compute-0 nova_compute[189381]: 2025-11-25 10:36:17.087 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:18 compute-0 nova_compute[189381]: 2025-11-25 10:36:18.431 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:18 compute-0 podman[241036]: 2025-11-25 10:36:18.96592501 +0000 UTC m=+0.080619215 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:36:18 compute-0 podman[241037]: 2025-11-25 10:36:18.994739744 +0000 UTC m=+0.103239120 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 10:36:20 compute-0 podman[241076]: 2025-11-25 10:36:20.960520763 +0000 UTC m=+0.075446475 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, name=ubi9, io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 25 10:36:22 compute-0 nova_compute[189381]: 2025-11-25 10:36:22.089 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:23 compute-0 nova_compute[189381]: 2025-11-25 10:36:23.433 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:25 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 10:36:27 compute-0 nova_compute[189381]: 2025-11-25 10:36:27.091 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:27 compute-0 podman[241099]: 2025-11-25 10:36:27.9549102 +0000 UTC m=+0.073557502 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:36:28 compute-0 nova_compute[189381]: 2025-11-25 10:36:28.435 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:29 compute-0 podman[203557]: time="2025-11-25T10:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:36:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:36:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4786 "" "Go-http-client/1.1"
Nov 25 10:36:31 compute-0 openstack_network_exporter[205722]: ERROR   10:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:36:31 compute-0 openstack_network_exporter[205722]: ERROR   10:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:36:31 compute-0 openstack_network_exporter[205722]: ERROR   10:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:36:31 compute-0 openstack_network_exporter[205722]: ERROR   10:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:36:31 compute-0 openstack_network_exporter[205722]: ERROR   10:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:36:32 compute-0 nova_compute[189381]: 2025-11-25 10:36:32.093 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:32 compute-0 podman[241117]: 2025-11-25 10:36:32.95285545 +0000 UTC m=+0.059128304 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:36:32 compute-0 podman[241116]: 2025-11-25 10:36:32.959204104 +0000 UTC m=+0.068832345 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:36:33 compute-0 nova_compute[189381]: 2025-11-25 10:36:33.439 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:34 compute-0 podman[241157]: 2025-11-25 10:36:34.984565878 +0000 UTC m=+0.099090331 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 10:36:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:36:36.036 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:36:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:36:36.037 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:36:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:36:36.037 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:36:37 compute-0 nova_compute[189381]: 2025-11-25 10:36:37.095 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:37 compute-0 podman[241183]: 2025-11-25 10:36:37.983491556 +0000 UTC m=+0.094125477 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:36:38 compute-0 nova_compute[189381]: 2025-11-25 10:36:38.441 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:42 compute-0 nova_compute[189381]: 2025-11-25 10:36:42.097 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:43 compute-0 nova_compute[189381]: 2025-11-25 10:36:43.446 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
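The recurring [POLLIN] lines are the OVS IDL's event loop waking up because the OVSDB connection on fd 26 became readable; ovs/poller.py is a thin wrapper over poll(2). A self-contained sketch of the same mechanism with Python's select module, using a pipe as a stand-in for the database socket:

    import os
    import select

    r, w = os.pipe()                   # stand-in for the OVSDB connection fd
    poller = select.poll()
    poller.register(r, select.POLLIN)  # wake when the fd becomes readable

    os.write(w, b"update")             # the peer sends data
    for fd, event in poller.poll(1000):
        if event & select.POLLIN:
            print(f"[POLLIN] on fd {fd}:", os.read(fd, 64))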
Nov 25 10:36:43 compute-0 podman[241202]: 2025-11-25 10:36:43.95706573 +0000 UTC m=+0.071131981 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:36:47 compute-0 nova_compute[189381]: 2025-11-25 10:36:47.100 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:48 compute-0 nova_compute[189381]: 2025-11-25 10:36:48.450 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:49 compute-0 podman[241228]: 2025-11-25 10:36:49.980500868 +0000 UTC m=+0.081137981 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 10:36:50 compute-0 podman[241227]: 2025-11-25 10:36:50.003493674 +0000 UTC m=+0.111054987 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 25 10:36:51 compute-0 podman[241262]: 2025-11-25 10:36:51.993922576 +0000 UTC m=+0.106233218 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, com.redhat.component=ubi9-container, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, release=1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0)
Nov 25 10:36:52 compute-0 nova_compute[189381]: 2025-11-25 10:36:52.103 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:53 compute-0 nova_compute[189381]: 2025-11-25 10:36:53.455 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:57 compute-0 nova_compute[189381]: 2025-11-25 10:36:57.106 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:58 compute-0 nova_compute[189381]: 2025-11-25 10:36:58.458 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:36:59 compute-0 podman[241288]: 2025-11-25 10:36:59.005069648 +0000 UTC m=+0.109272325 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:36:59 compute-0 podman[203557]: time="2025-11-25T10:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:36:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:36:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4784 "" "Go-http-client/1.1"
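These two GET lines are the podman system service answering libpod REST calls over its Unix socket (the CONTAINER_HOST value in the podman_exporter config above). The same endpoints can be queried directly; a sketch assuming the default root socket path /run/podman/podman.sock:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""

        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read(200))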
Nov 25 10:37:01 compute-0 openstack_network_exporter[205722]: ERROR   10:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:37:01 compute-0 openstack_network_exporter[205722]: ERROR   10:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:37:01 compute-0 openstack_network_exporter[205722]: ERROR   10:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:37:01 compute-0 openstack_network_exporter[205722]: ERROR   10:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
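These exporter errors mean no appctl control sockets could be found: ovn-northd does not run on a compute node at all, and the datapath queries fail because no userspace (dpif-netdev) datapath exists here. OVS daemons create control sockets named <daemon>.<pid>.ctl under the runtime directory; a sketch of the same lookup, assuming the usual default directory:

    import glob
    import os

    RUNDIR = "/var/run/openvswitch"  # usual default; deployments can override it

    def control_socket(daemon):
        # Each running daemon creates "<daemon>.<pid>.ctl" in RUNDIR.
        matches = glob.glob(os.path.join(RUNDIR, f"{daemon}.*.ctl"))
        return matches[0] if matches else None

    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        sock = control_socket(daemon)
        print(daemon, "->", sock or "no control socket files found")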
Nov 25 10:37:02 compute-0 nova_compute[189381]: 2025-11-25 10:37:02.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:02 compute-0 nova_compute[189381]: 2025-11-25 10:37:02.111 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.329 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.331 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
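Each registration line above hands one pollster to the shared ThreadPoolExecutor, which is why the agent first warned that a single worker thread must serve all pollsters in the source: tasks queue and run one at a time. Stripped of the ceilometer machinery, the submission pattern looks roughly like this (pollster names taken from the log; the task body is illustrative):

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        # Stand-in for ceilometer's per-pollster polling task.
        return f"polled {name}"

    pollsters = ["network.outgoing.bytes", "memory.usage", "cpu"]

    # One worker, many pollsters: submissions queue up and execute
    # sequentially, matching the "[1] threads" line above.
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(run_pollster, p) for p in pollsters]
        for future in futures:
            print(future.result())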
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.350 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'name': 'vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.357 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.358 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:37:03.359352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.367 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes volume: 4670 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.372 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.373 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.373 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.373 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.374 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.374 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.374 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.374 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes.delta volume: 3249 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.374 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
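network.outgoing.bytes is a cumulative counter, while the .delta meter reports the change since the previous poll: the cumulative 4670 for instance 44e7d3d0 alongside a delta of 3249 implies a prior reading of 1421. A hypothetical helper showing the derivation (the cache and reset handling are assumptions for illustration, not ceilometer's exact code):

    previous = {}

    def delta_sample(instance_id, cumulative):
        prev = previous.get(instance_id)
        previous[instance_id] = cumulative
        if prev is None or cumulative < prev:
            return None  # no baseline yet, or the counter reset
        return cumulative - prev

    print(delta_sample("44e7d3d0", 1421))  # None: first poll sets the baseline
    print(delta_sample("44e7d3d0", 4670))  # 3249, the delta logged above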
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.375 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:37:03.374299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:37:03.376242) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.400 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/memory.usage volume: 49.1171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.424 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.426 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.426 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.426 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.426 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.427 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.428 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.428 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.429 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:37:03.426657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.429 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:37:03.428516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.430 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.430 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:37:03.430287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.430 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.430 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.431 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:37:03.432062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.432 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/cpu volume: 83050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.432 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 39830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
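The cpu meter is cumulative guest CPU time in nanoseconds (83050000000 ns ≈ 83 s for the first instance). Converting it to a utilization percentage requires two consecutive readings, the polling interval, and the vCPU count (1 for the m1.small flavor above); a small sketch with illustrative numbers:

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        # Fraction of available CPU time consumed over the interval.
        return (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

    # e.g. 3e9 ns consumed over a 30 s poll on a 1-vCPU instance -> 10.0 %
    print(cpu_util_percent(80_050_000_000, 83_050_000_000, 30))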
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.433 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.433 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.433 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.433 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:37:03.433720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.434 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.435 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:37:03.435312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.460 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.461 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.461 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 nova_compute[189381]: 2025-11-25 10:37:03.463 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.495 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.496 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.497 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.497 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.498 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:37:03.498254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.575 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.576 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.577 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.686 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.687 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.687 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.688 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.689 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 1593102466 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.689 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 365927498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.689 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 408314029 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:37:03.688624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.689 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.690 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.690 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.690 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.691 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.691 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.691 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.691 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:37:03.691276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.692 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.692 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.692 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.692 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.693 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.693 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.693 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.693 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.693 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.693 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.694 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.694 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.694 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:37:03.693815) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.694 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.695 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.695 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.696 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.696 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.696 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.697 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:37:03.696317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.697 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.697 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.698 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.698 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.698 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.698 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.698 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.698 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.699 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.699 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 31878521808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.699 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:37:03.699039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.699 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 231382257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.699 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.700 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.700 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.700 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.701 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.701 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:37:03.701526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.702 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.702 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.702 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.702 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.703 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.703 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.703 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:37:03.703002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.704 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.704 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.704 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.705 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:37:03.705516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.705 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes.delta volume: 3405 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.706 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.707 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.707 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.707 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.707 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.708 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.708 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.708 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:37:03.706899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.709 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.709 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:37:03.709478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.710 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:37:03.710573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.710 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.711 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:37:03.711879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.713 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:37:03.712966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.713 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.713 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.714 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.714 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.714 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:37:03.714268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.716 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:37:03.717 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:37:03 compute-0 podman[241309]: 2025-11-25 10:37:03.967973324 +0000 UTC m=+0.080970896 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:37:04 compute-0 podman[241308]: 2025-11-25 10:37:04.013966996 +0000 UTC m=+0.114966301 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, distribution-scope=public, vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:37:05 compute-0 podman[241352]: 2025-11-25 10:37:05.987825088 +0000 UTC m=+0.103040725 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.050 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.125 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.188 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.189 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.250 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.252 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.314 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.315 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.391 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.400 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.470 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.473 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.537 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.550 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.612 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.614 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:37:06 compute-0 nova_compute[189381]: 2025-11-25 10:37:06.674 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.040 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.042 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=72.18637084960938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.043 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.043 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.114 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.298 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.299 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.299 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.300 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.368 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.380 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.382 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:37:07 compute-0 nova_compute[189381]: 2025-11-25 10:37:07.382 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:37:08 compute-0 nova_compute[189381]: 2025-11-25 10:37:08.386 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:08 compute-0 nova_compute[189381]: 2025-11-25 10:37:08.388 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:08 compute-0 nova_compute[189381]: 2025-11-25 10:37:08.389 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:08 compute-0 nova_compute[189381]: 2025-11-25 10:37:08.390 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:37:08 compute-0 nova_compute[189381]: 2025-11-25 10:37:08.466 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:08 compute-0 sshd-session[241402]: Connection closed by authenticating user root 171.244.51.45 port 36660 [preauth]
Nov 25 10:37:08 compute-0 podman[241404]: 2025-11-25 10:37:08.984612695 +0000 UTC m=+0.087215717 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 25 10:37:09 compute-0 nova_compute[189381]: 2025-11-25 10:37:09.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:09 compute-0 nova_compute[189381]: 2025-11-25 10:37:09.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:11 compute-0 nova_compute[189381]: 2025-11-25 10:37:11.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:11 compute-0 nova_compute[189381]: 2025-11-25 10:37:11.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:37:11 compute-0 nova_compute[189381]: 2025-11-25 10:37:11.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:37:11 compute-0 nova_compute[189381]: 2025-11-25 10:37:11.551 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:37:11 compute-0 nova_compute[189381]: 2025-11-25 10:37:11.555 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:37:11 compute-0 nova_compute[189381]: 2025-11-25 10:37:11.556 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:37:11 compute-0 nova_compute[189381]: 2025-11-25 10:37:11.557 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:37:12 compute-0 nova_compute[189381]: 2025-11-25 10:37:12.116 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:12 compute-0 nova_compute[189381]: 2025-11-25 10:37:12.951 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:37:12 compute-0 nova_compute[189381]: 2025-11-25 10:37:12.969 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:37:12 compute-0 nova_compute[189381]: 2025-11-25 10:37:12.971 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:37:12 compute-0 nova_compute[189381]: 2025-11-25 10:37:12.973 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:37:13 compute-0 nova_compute[189381]: 2025-11-25 10:37:13.471 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:14 compute-0 podman[241423]: 2025-11-25 10:37:14.743129431 +0000 UTC m=+0.067922818 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:37:17 compute-0 nova_compute[189381]: 2025-11-25 10:37:17.120 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:18 compute-0 nova_compute[189381]: 2025-11-25 10:37:18.475 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:20 compute-0 podman[241449]: 2025-11-25 10:37:20.974597703 +0000 UTC m=+0.074710485 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 10:37:20 compute-0 podman[241448]: 2025-11-25 10:37:20.980504784 +0000 UTC m=+0.085624681 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:37:22 compute-0 nova_compute[189381]: 2025-11-25 10:37:22.122 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:22 compute-0 podman[241488]: 2025-11-25 10:37:22.976935971 +0000 UTC m=+0.073730756 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.4, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0)
Nov 25 10:37:23 compute-0 nova_compute[189381]: 2025-11-25 10:37:23.480 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:27 compute-0 nova_compute[189381]: 2025-11-25 10:37:27.126 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:28 compute-0 nova_compute[189381]: 2025-11-25 10:37:28.485 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:29 compute-0 podman[203557]: time="2025-11-25T10:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:37:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:37:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4787 "" "Go-http-client/1.1"
Nov 25 10:37:29 compute-0 podman[241510]: 2025-11-25 10:37:29.972234744 +0000 UTC m=+0.073519350 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 25 10:37:31 compute-0 openstack_network_exporter[205722]: ERROR   10:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:37:31 compute-0 openstack_network_exporter[205722]: ERROR   10:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:37:31 compute-0 openstack_network_exporter[205722]: ERROR   10:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:37:31 compute-0 openstack_network_exporter[205722]: ERROR   10:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:37:31 compute-0 openstack_network_exporter[205722]: ERROR   10:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:37:32 compute-0 nova_compute[189381]: 2025-11-25 10:37:32.129 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:33 compute-0 nova_compute[189381]: 2025-11-25 10:37:33.489 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:34 compute-0 podman[241528]: 2025-11-25 10:37:34.967976739 +0000 UTC m=+0.083919801 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6)
Nov 25 10:37:34 compute-0 podman[241529]: 2025-11-25 10:37:34.97318358 +0000 UTC m=+0.077323710 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:37:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:37:36.037 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:37:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:37:36.038 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:37:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:37:36.039 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:37:37 compute-0 podman[241569]: 2025-11-25 10:37:37.011065487 +0000 UTC m=+0.121480019 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:37:37 compute-0 nova_compute[189381]: 2025-11-25 10:37:37.132 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:38 compute-0 nova_compute[189381]: 2025-11-25 10:37:38.493 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:39 compute-0 podman[241595]: 2025-11-25 10:37:39.977435893 +0000 UTC m=+0.083072787 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:37:42 compute-0 nova_compute[189381]: 2025-11-25 10:37:42.136 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:43 compute-0 irqbalance[818]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 25 10:37:43 compute-0 irqbalance[818]: IRQ 26 affinity is now unmanaged
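The two irqbalance lines are an EPERM from the kernel: the daemon tried to rewrite the affinity mask of IRQ 26 and the write was refused, typically because the interrupt's affinity is kernel-managed (common for virtio MSI-X vectors in KVM guests), so irqbalance gives up and marks the IRQ unmanaged. A sketch of poking the same /proc interface directly (assumes Linux; the write needs root and may be refused exactly as above):

    IRQ = 26

    # Current CPU affinity bitmask, e.g. "ff" == CPUs 0-7.
    with open(f"/proc/irq/{IRQ}/smp_affinity") as f:
        print("mask:", f.read().strip())

    # The same write irqbalance attempts; on this host the kernel answers
    # "Operation not permitted" and the IRQ becomes unmanaged.
    try:
        with open(f"/proc/irq/{IRQ}/smp_affinity", "w") as f:
            f.write("ff")
    except OSError as exc:
        print("affinity change refused:", exc)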
Nov 25 10:37:43 compute-0 nova_compute[189381]: 2025-11-25 10:37:43.495 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:44 compute-0 podman[241616]: 2025-11-25 10:37:44.976840994 +0000 UTC m=+0.076659651 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:37:47 compute-0 nova_compute[189381]: 2025-11-25 10:37:47.136 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:48 compute-0 nova_compute[189381]: 2025-11-25 10:37:48.499 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:51 compute-0 podman[241642]: 2025-11-25 10:37:51.973208718 +0000 UTC m=+0.082878744 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Nov 25 10:37:51 compute-0 podman[241641]: 2025-11-25 10:37:51.99690292 +0000 UTC m=+0.111648392 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:37:52 compute-0 nova_compute[189381]: 2025-11-25 10:37:52.142 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:53 compute-0 nova_compute[189381]: 2025-11-25 10:37:53.502 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:53 compute-0 podman[241678]: 2025-11-25 10:37:53.978776312 +0000 UTC m=+0.089165006 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543)
Nov 25 10:37:57 compute-0 nova_compute[189381]: 2025-11-25 10:37:57.143 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:58 compute-0 nova_compute[189381]: 2025-11-25 10:37:58.506 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:37:59 compute-0 podman[203557]: time="2025-11-25T10:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:37:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:37:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
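These two access-log lines are the podman system service (pid 203557) answering prometheus-podman-exporter over the API socket the exporter container mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config above). A sketch of issuing the same libpod query by hand over that socket, using only the standard library:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket; the host name is a placeholder."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.socket_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])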
Nov 25 10:38:00 compute-0 podman[241697]: 2025-11-25 10:38:00.99147069 +0000 UTC m=+0.098597797 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:38:01 compute-0 openstack_network_exporter[205722]: ERROR   10:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:38:01 compute-0 openstack_network_exporter[205722]: ERROR   10:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:38:01 compute-0 openstack_network_exporter[205722]: ERROR   10:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:38:01 compute-0 openstack_network_exporter[205722]: ERROR   10:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:38:01 compute-0 openstack_network_exporter[205722]: ERROR   10:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
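These exporter errors are lookup failures, not daemon crashes: ovs-appctl-style calls need a <daemon>.<pid>.ctl control socket in the daemon's run directory, and ovn-northd belongs on the OVN central nodes rather than a compute host, so no such socket can exist here; likewise the dpif-netdev/* commands only apply to a userspace (DPDK) datapath, while the port binding later in this log shows datapath_type=system (the kernel datapath). A sketch of the same kind of socket discovery (run directories taken from the volume mounts logged above):

    import glob

    # appctl-style tools resolve a daemon's control socket as
    # <rundir>/<daemon>.<pid>.ctl; no match => "no control socket files found".
    for daemon, rundir in [
        ("ovs-vswitchd", "/var/run/openvswitch"),
        ("ovsdb-server", "/var/run/openvswitch"),
        ("ovn-controller", "/run/ovn"),
        ("ovn-northd", "/run/ovn"),
    ]:
        hits = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", hits or "no control socket found")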
Nov 25 10:38:02 compute-0 nova_compute[189381]: 2025-11-25 10:38:02.145 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:03 compute-0 nova_compute[189381]: 2025-11-25 10:38:03.509 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:04 compute-0 nova_compute[189381]: 2025-11-25 10:38:04.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:05 compute-0 podman[241716]: 2025-11-25 10:38:05.960692845 +0000 UTC m=+0.074597377 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:38:05 compute-0 podman[241717]: 2025-11-25 10:38:05.965971906 +0000 UTC m=+0.076737748 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.052 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.052 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.052 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
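The Acquiring/acquired/released triplet is oslo.concurrency's lockutils at work: everything in the resource tracker that touches compute state serializes on the "compute_resources" semaphore, and the DEBUG lines report the wait and hold times. A minimal sketch of the same pattern (the function body is illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    # Every callable decorated with the same lock name is mutually
    # exclusive; entering and leaving produces exactly the kind of
    # Acquiring/acquired/released DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        print("holding the compute_resources lock")

    clean_compute_node_cache()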
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.052 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.131 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.195 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.196 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.259 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.262 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.324 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.326 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.386 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.395 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.461 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.463 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.523 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.525 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.590 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.591 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:38:06 compute-0 nova_compute[189381]: 2025-11-25 10:38:06.652 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
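Each disk audit above is qemu-img info wrapped in oslo.concurrency's prlimit helper: the child is capped at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a pathological image cannot wedge the compute agent, and --force-share lets it read a disk a running guest holds open. A sketch of the same guarded call through oslo's API (disk path copied from the log):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # --as=1073741824
        cpu_time=30,                       # --cpu=30
    )

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk",
        "--force-share", "--output=json",
        prlimit=limits,
    )
    print(out)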
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.550 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.707 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.709 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5045MB free_disk=72.18637084960938GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.710 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.710 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.786 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.787 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.788 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.788 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.802 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.821 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.822 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.842 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.862 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.924 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.940 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
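The inventory that placement keeps reporting as unchanged defines schedulable capacity per resource class as (total - reserved) * allocation_ratio. Worked through with the numbers in the log:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")

    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2 -- the two instances'
    # allocations above (1 VCPU, 512 MB, 2 DISK_GB each) barely dent it.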
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.941 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:38:07 compute-0 nova_compute[189381]: 2025-11-25 10:38:07.942 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:38:07 compute-0 podman[241785]: 2025-11-25 10:38:07.99447383 +0000 UTC m=+0.104325992 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 10:38:08 compute-0 nova_compute[189381]: 2025-11-25 10:38:08.511 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:09 compute-0 nova_compute[189381]: 2025-11-25 10:38:09.942 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:09 compute-0 nova_compute[189381]: 2025-11-25 10:38:09.944 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:09 compute-0 nova_compute[189381]: 2025-11-25 10:38:09.945 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:09 compute-0 nova_compute[189381]: 2025-11-25 10:38:09.945 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:09 compute-0 nova_compute[189381]: 2025-11-25 10:38:09.946 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:38:10 compute-0 nova_compute[189381]: 2025-11-25 10:38:10.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
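The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic task machinery: manager methods registered with a decorator are fired by run_periodic_tasks whenever they come due, and _reclaim_queued_deletes then bails out immediately because reclaim_instance_interval is not positive. A minimal sketch of that machinery (toy manager, not nova's code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _reclaim_queued_deletes(self, context):
            reclaim_interval = 0  # stands in for CONF.reclaim_instance_interval
            if reclaim_interval <= 0:
                print("CONF.reclaim_instance_interval <= 0, skipping...")

    Manager().run_periodic_tasks(context=None)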
Nov 25 10:38:10 compute-0 podman[241810]: 2025-11-25 10:38:10.955901136 +0000 UTC m=+0.073579438 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 10:38:11 compute-0 nova_compute[189381]: 2025-11-25 10:38:11.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:11 compute-0 nova_compute[189381]: 2025-11-25 10:38:11.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:38:11 compute-0 nova_compute[189381]: 2025-11-25 10:38:11.561 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:38:11 compute-0 nova_compute[189381]: 2025-11-25 10:38:11.562 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:38:11 compute-0 nova_compute[189381]: 2025-11-25 10:38:11.563 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:38:12 compute-0 nova_compute[189381]: 2025-11-25 10:38:12.557 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:12 compute-0 nova_compute[189381]: 2025-11-25 10:38:12.822 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updating instance_info_cache with network_info: [{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:38:12 compute-0 nova_compute[189381]: 2025-11-25 10:38:12.836 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:38:12 compute-0 nova_compute[189381]: 2025-11-25 10:38:12.837 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
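The cache heal writes back the full network_info document shown two lines up; addressing lives under network.subnets[].ips[], with floating IPs nested inside the fixed IP they map to. A sketch of walking that exact structure (the vif dict is abridged from the log line):

    # Abridged from the "Updating instance_info_cache" line above.
    vif = {
        "id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b",
        "address": "fa:16:3e:ab:45:ac",
        "network": {
            "label": "private",
            "subnets": [{
                "cidr": "192.168.0.0/24",
                "ips": [{
                    "address": "192.168.0.71",
                    "type": "fixed",
                    "floating_ips": [
                        {"address": "192.168.122.221", "type": "floating"},
                    ],
                }],
            }],
        },
    }

    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(ip["type"], ip["address"])           # fixed 192.168.0.71
            for fip in ip.get("floating_ips", []):
                print("  floating", fip["address"])    # floating 192.168.122.221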
Nov 25 10:38:13 compute-0 nova_compute[189381]: 2025-11-25 10:38:13.515 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:14 compute-0 nova_compute[189381]: 2025-11-25 10:38:14.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:15 compute-0 podman[241831]: 2025-11-25 10:38:15.955224796 +0000 UTC m=+0.068179352 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:38:16 compute-0 nova_compute[189381]: 2025-11-25 10:38:16.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:17 compute-0 nova_compute[189381]: 2025-11-25 10:38:17.555 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:18 compute-0 nova_compute[189381]: 2025-11-25 10:38:18.519 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:22 compute-0 nova_compute[189381]: 2025-11-25 10:38:22.562 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:22 compute-0 podman[241858]: 2025-11-25 10:38:22.983233184 +0000 UTC m=+0.079985331 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:38:23 compute-0 podman[241857]: 2025-11-25 10:38:23.00848001 +0000 UTC m=+0.109657545 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:38:23 compute-0 nova_compute[189381]: 2025-11-25 10:38:23.523 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:24 compute-0 podman[241895]: 2025-11-25 10:38:24.972216611 +0000 UTC m=+0.073076393 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, name=ubi9, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, version=9.4, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vendor=Red Hat, Inc.)
Nov 25 10:38:27 compute-0 nova_compute[189381]: 2025-11-25 10:38:27.568 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:28 compute-0 nova_compute[189381]: 2025-11-25 10:38:28.525 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
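The recurring [POLLIN] on fd 26 lines come from the OVS IDL used by nova_compute: it blocks in a poll loop on the OVSDB connection and logs a wakeup whenever the socket becomes readable. A minimal sketch of that wait, assuming a connected socket object; the real loop lives in ovs/poller.py:

    import select

    def wait_for_ovsdb(sock, timeout_ms=5000) -> bool:
        # Block until the OVSDB socket is readable (POLLIN), the same
        # condition __log_wakeup reports above; the caller would then let
        # the IDL consume the pending JSON-RPC update.
        poller = select.poll()
        poller.register(sock.fileno(), select.POLLIN)
        for _fd, event in poller.poll(timeout_ms):
            if event & select.POLLIN:
                return True
        return False  # timed out with nothing to read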
Nov 25 10:38:29 compute-0 podman[203557]: time="2025-11-25T10:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:38:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:38:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
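The two GET requests above are scrapes of the libpod REST API over the podman socket. A minimal sketch of issuing the same containers/json query with only the standard library, assuming the socket path /run/podman/podman.sock (the path mounted into the podman_exporter container later in this log); an HTTP/1.0 request keeps the response un-chunked so the body can be split off naively:

    import json
    import socket

    def libpod_get(path: str) -> bytes:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect("/run/podman/podman.sock")
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: d\r\n\r\n".encode())
        chunks = []
        while chunk := s.recv(65536):
            chunks.append(chunk)
        s.close()
        return b"".join(chunks)

    raw = libpod_get("/v4.9.3/libpod/containers/json?all=true")
    body = raw.split(b"\r\n\r\n", 1)[1]
    print(len(json.loads(body)), "containers")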
Nov 25 10:38:31 compute-0 openstack_network_exporter[205722]: ERROR   10:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:38:31 compute-0 openstack_network_exporter[205722]: ERROR   10:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:38:31 compute-0 openstack_network_exporter[205722]: ERROR   10:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:38:31 compute-0 openstack_network_exporter[205722]: ERROR   10:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
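These exporter errors are expected on a compute node: appctl-style calls need a per-daemon control socket, ovn-northd only runs on controller nodes, and no userspace (netdev) datapath exists here for the pmd-rxq/pmd-perf queries. A minimal sketch of the check the exporter is effectively performing, with the conventional control-socket paths taken as assumptions:

    import glob

    # <daemon>.<pid>.ctl sockets are created at daemon startup; when the
    # glob is empty the exporter logs "no control socket files found"
    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket found")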
Nov 25 10:38:31 compute-0 podman[241914]: 2025-11-25 10:38:31.95472082 +0000 UTC m=+0.064673191 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:38:32 compute-0 nova_compute[189381]: 2025-11-25 10:38:32.572 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:33 compute-0 nova_compute[189381]: 2025-11-25 10:38:33.528 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:38:36.037 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:38:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:38:36.038 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:38:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:38:36.038 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
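The acquire/release pair above is oslo.concurrency's named-lock pattern guarding ProcessMonitor._check_child_processes, with the waited/held durations logged around the critical section. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the body is a placeholder for what neutron actually does there (respawning dead child processes):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # critical section: only one thread at a time may inspect and
        # respawn the monitored child processes
        pass

    _check_child_processes()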
Nov 25 10:38:36 compute-0 podman[241933]: 2025-11-25 10:38:36.975740764 +0000 UTC m=+0.077421078 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
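node_exporter anchors the --collector.systemd.unit-include value, so the regex in the config_data above must match whole unit names. A quick way to verify which units it selects; the unit list here is illustrative:

    import re

    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    units = ["edpm_nova_compute.service", "ovs-vswitchd.service",
             "virtqemud.service", "rsyslog.service", "sshd.service"]
    for unit in units:
        # fullmatch mirrors the exporter's anchored matching
        print(unit, bool(pattern.fullmatch(unit)))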
Nov 25 10:38:36 compute-0 podman[241932]: 2025-11-25 10:38:36.989527681 +0000 UTC m=+0.094566541 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 25 10:38:37 compute-0 nova_compute[189381]: 2025-11-25 10:38:37.575 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:38 compute-0 nova_compute[189381]: 2025-11-25 10:38:38.533 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:39 compute-0 podman[241975]: 2025-11-25 10:38:39.001013284 +0000 UTC m=+0.108368488 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:38:41 compute-0 podman[242001]: 2025-11-25 10:38:41.972100343 +0000 UTC m=+0.076944964 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:38:42 compute-0 nova_compute[189381]: 2025-11-25 10:38:42.579 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:43 compute-0 nova_compute[189381]: 2025-11-25 10:38:43.536 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:46 compute-0 podman[242023]: 2025-11-25 10:38:46.959780487 +0000 UTC m=+0.068492191 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:38:47 compute-0 nova_compute[189381]: 2025-11-25 10:38:47.580 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:48 compute-0 nova_compute[189381]: 2025-11-25 10:38:48.539 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:52 compute-0 nova_compute[189381]: 2025-11-25 10:38:52.580 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:53 compute-0 nova_compute[189381]: 2025-11-25 10:38:53.543 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:53 compute-0 podman[242048]: 2025-11-25 10:38:53.961589352 +0000 UTC m=+0.076014337 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible)
Nov 25 10:38:53 compute-0 podman[242049]: 2025-11-25 10:38:53.967236585 +0000 UTC m=+0.077152520 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Nov 25 10:38:55 compute-0 podman[242087]: 2025-11-25 10:38:55.981757847 +0000 UTC m=+0.089003651 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, distribution-scope=public, name=ubi9, release-0.7.12=, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, managed_by=edpm_ansible, container_name=kepler)
Nov 25 10:38:57 compute-0 nova_compute[189381]: 2025-11-25 10:38:57.583 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:58 compute-0 nova_compute[189381]: 2025-11-25 10:38:58.547 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:38:59 compute-0 nova_compute[189381]: 2025-11-25 10:38:59.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:38:59 compute-0 nova_compute[189381]: 2025-11-25 10:38:59.026 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
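_cleanup_incomplete_migrations is one of the periodic tasks oslo.service runs on a fixed cadence inside nova-compute's ComputeManager; run_periodic_tasks in the log line above is the dispatcher. A minimal sketch of how such a task is declared, assuming oslo.service is installed; the spacing value is illustrative:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=600)
        def _cleanup_incomplete_migrations(self, context):
            # placeholder for nova's cleanup of instances deleted while
            # a migration was still in flight
            pass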
Nov 25 10:38:59 compute-0 podman[203557]: time="2025-11-25T10:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:38:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:38:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Nov 25 10:39:01 compute-0 openstack_network_exporter[205722]: ERROR   10:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:39:01 compute-0 openstack_network_exporter[205722]: ERROR   10:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:39:01 compute-0 openstack_network_exporter[205722]: ERROR   10:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:39:01 compute-0 openstack_network_exporter[205722]: ERROR   10:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:39:02 compute-0 nova_compute[189381]: 2025-11-25 10:39:02.591 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:02 compute-0 podman[242106]: 2025-11-25 10:39:02.94666843 +0000 UTC m=+0.061519440 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.331 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling cycle to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.331 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
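The warning above fires because this polling source has more pollsters than worker threads, so the executor serializes them and the cycle stretches out. A minimal sketch of that situation with the [1] thread reported in the log; pollster names follow the ones polled later in this cycle:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        return f"polled {name}"

    pollsters = ["network.outgoing.bytes", "network.outgoing.bytes.delta",
                 "memory.usage", "network.incoming.bytes"]
    # max_workers=1 matches "Processing pollsters ... with [1] threads":
    # tasks queue up and run one after another
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)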
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.343 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'name': 'vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.346 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
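Each discovered instance record above carries the fields the pollsters attach to their samples as resource metadata. A minimal sketch of extracting the commonly used subset, with field names taken from the log lines; the helper name and the exact subset are illustrative:

    def resource_metadata(instance: dict) -> dict:
        flavor = instance["flavor"]
        return {
            "display_name": instance["name"],
            "instance_type": flavor["name"],
            "vcpus": flavor["vcpus"],
            "memory_mb": flavor["ram"],
            "host": instance["OS-EXT-SRV-ATTR:host"],
            "status": instance["status"],
        }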
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.346 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:39:03.346789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.351 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes volume: 4740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.356 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.357 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.357 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.357 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.357 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
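The .delta variant logged above is derived by subtracting the previous cycle's cumulative reading from the current one. A minimal sketch of that bookkeeping; the cache layout and the assumed previous reading (4670, consistent with the 4740 total and delta of 70 logged for instance 44e7d3d0) are illustrative:

    _previous: dict = {}

    def delta(instance_id: str, counter: str, value: int) -> int:
        key = (instance_id, counter)
        prev = _previous.get(key)
        _previous[key] = value
        # the first observation yields 0 rather than the cumulative value
        return value - prev if prev is not None else 0

    print(delta("44e7d3d0", "network.outgoing.bytes", 4670))  # first cycle: 0
    print(delta("44e7d3d0", "network.outgoing.bytes", 4740))  # next cycle: 70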
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.358 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.359 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:39:03.357399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:39:03.358980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.380 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/memory.usage volume: 49.1171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.403 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.404 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.405 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.405 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.405 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:39:03.405445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.407 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.407 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.408 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.408 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.408 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.409 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:39:03.406972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:39:03.408373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.410 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/cpu volume: 202560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.410 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 41230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
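
The cpu volumes above are cumulative guest CPU time in nanoseconds, so 202560000000 is roughly 202.6 seconds of CPU time consumed so far; utilisation has to be derived from the difference between two cycles. A quick check, where the second reading, the polling interval, and the vCPU count are assumptions for illustration:

    NS_PER_S = 1_000_000_000

    cpu_ns = 202_560_000_000            # cpu sample for 44e7d3d0-... above
    print(cpu_ns / NS_PER_S)            # 202.56 seconds of CPU time so far

    # Utilisation between two cycles (next reading and interval assumed):
    prev_ns, curr_ns = 202_560_000_000, 203_160_000_000
    interval_s, vcpus = 300.0, 1
    util_pct = 100.0 * (curr_ns - prev_ns) / (interval_s * NS_PER_S * vcpus)
    print(f"{util_pct:.1f}% CPU")       # 0.2% over the assumed 300 s window
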
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:39:03.410011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.411 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.411 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:39:03.411531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.412 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:39:03.412832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.443 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.443 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.443 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.469 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.470 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.470 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
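
Each instance reports three disk.device.capacity samples, i.e. three attached block devices. The repeated 1073741824 is exactly 1 GiB; the third, much smaller value would be consistent with a small metadata device such as a config drive, though that is an assumption since the log does not name the devices:

    print(1073741824 == 1024 ** 3)        # True: the 1 GiB devices
    print(583680 / 1024, 485376 / 1024)   # 570.0 and 474.0 KiB third devices
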
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.471 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.471 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.472 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:39:03.472060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.535 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.535 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.535 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 nova_compute[189381]: 2025-11-25 10:39:03.551 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.602 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.603 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.604 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.605 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.605 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.605 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 1593102466 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.606 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 365927498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.606 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 408314029 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.606 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.607 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.607 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.608 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.609 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.609 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.609 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.610 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.610 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.610 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.611 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.611 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
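
The disk.device.read.latency volumes are cumulative nanoseconds spent in reads, and disk.device.read.requests the matching cumulative operation counts, so pairing them gives a rough mean service time per read. Taking the first device of instance 44e7d3d0-... from the two pollsters above (pairing by sample order is an assumption):

    total_read_ns = 1_593_102_466    # disk.device.read.latency, first sample
    total_reads = 844                # disk.device.read.requests, first sample
    print(total_read_ns / total_reads / 1e6)   # ~1.89 ms mean per read
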
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.612 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:39:03.605504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.613 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:39:03.609446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.613 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.613 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:39:03.613473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.614 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.614 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.615 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.615 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.615 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
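
disk.device.capacity, disk.device.usage, and the disk.device.allocation samples further down line up with the (capacity, allocation, physical) triple that libvirt reports per block device; note that PerDevicePhysicalPollster is the class logged above for disk.device.usage. A sketch with the libvirt Python binding, where the connection URI and the device name "vda" are assumptions:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("44e7d3d0-d059-412e-a1a9-467d774d2bee")
    capacity, allocation, physical = dom.blockInfo("vda")  # device name assumed
    # capacity   -> disk.device.capacity
    # allocation -> disk.device.allocation
    # physical   -> disk.device.usage
    conn.close()
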
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.616 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.617 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.617 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:39:03.616972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.617 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.618 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.618 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.618 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.619 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.619 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.620 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.620 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.620 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.620 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 31878521808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.620 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 231382257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.620 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.621 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:39:03.620177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.621 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.622 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.622 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.623 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.623 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.623 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
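
A power.state volume of 1 is the standard Nova power-state code for a running instance, so both guests above are up:

    # Standard Nova power-state codes; volume 1 above means "running".
    POWER_STATES = {0: "nostate", 1: "running", 3: "paused",
                    4: "shutdown", 6: "crashed", 7: "suspended"}
    print(POWER_STATES[1])   # running
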
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.624 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.624 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.624 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.625 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:39:03.623163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:39:03.624729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.625 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.626 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.626 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.626 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.627 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.627 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.628 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.628 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
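
A ".delta" meter is derived from consecutive readings of its cumulative counterpart, so the zero volumes above indicate either no baseline yet or no change since the previous cycle. An illustrative cache, not ceilometer's internal structure:

    _last = {}

    def incoming_bytes_delta(instance_id, cumulative_bytes):
        prev = _last.get(instance_id)
        _last[instance_id] = cumulative_bytes
        # No baseline on the first observation: report 0, as in the log above.
        return 0 if prev is None else max(0, cumulative_bytes - prev)

    print(incoming_bytes_delta("44e7d3d0", 4891))   # 0, first reading
    print(incoming_bytes_delta("44e7d3d0", 5391))   # 500 bytes since last cycle
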
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.629 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.629 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.629 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.629 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.630 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.630 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.630 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.631 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
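
Unlike the meters above, network.incoming.bytes.rate is skipped outright: per the message, the manager only runs a pollster when its discovery step turns up resources it has not already handled this cycle. Roughly, with illustrative names:

    def maybe_poll(pollster_name, discovered, already_handled):
        new = [r for r in discovered if r not in already_handled]
        if not new:
            return f"Skip pollster {pollster_name}, no new resources found this cycle"
        # ... otherwise fall through to the coordination check and sampling

    print(maybe_poll("network.incoming.bytes.rate", [], set()))
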
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.631 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.631 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.632 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.632 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.635 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.635 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
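[annotation] Every meter above follows the same per-pollster sequence: discovery, a coordination check (no hashring is configured, so this agent polls all local instances itself), sampling, and a heartbeat update. A schematic of that control flow using hypothetical stand-in classes, not ceilometer's real API:

    # Schematic of the per-pollster flow visible in the log above; the
    # classes here are hypothetical stand-ins, not ceilometer's real API.
    class StubPollster:
        name = 'network.incoming.packets'
        coordination_group = None          # no hashring -> no coordination needed

        def heartbeat(self):
            print(f'Pollster heartbeat update: {self.name}')

        def get_samples(self, resources):
            for uuid in resources:
                yield (uuid, self.name, 0)  # volume would come from libvirt

    def run_pollster(pollster, resources):
        # 1. discovery happened upstream; 2. coordination check;
        # 3. heartbeat; 4. poll each discovered resource.
        if pollster.coordination_group is not None:
            raise NotImplementedError('hashring coordination not sketched here')
        pollster.heartbeat()
        return list(pollster.get_samples(resources))

    print(run_pollster(StubPollster(), ['31174924-a3e8-4662-baad-ac9aa49c01ab']))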
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:39:03.627828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:39:03.629379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:39:03.631576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.640 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:39:03.632460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:39:03.633427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:39:03.634190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:39:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:39:03.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:39:03.635085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
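[annotation] These heartbeat records give a cheap per-pollster liveness signal. A minimal sketch that scans a journal dump like this one and flags pollsters whose last heartbeat lags the newest one, assuming the exact "Updated heartbeat for <meter> (<timestamp>)" format shown above:

    # Minimal sketch: report stale pollsters from a journal dump.
    import re
    from datetime import datetime, timedelta

    PATTERN = re.compile(r'Updated heartbeat for (\S+) \((\S+)\)')

    def stale_pollsters(log_path, max_age=timedelta(minutes=5)):
        latest = {}
        with open(log_path) as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    latest[m.group(1)] = datetime.fromisoformat(m.group(2))
        if not latest:
            return []
        newest = max(latest.values())
        return [name for name, ts in latest.items() if newest - ts > max_age]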
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.043 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.047 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.091 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.092 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.093 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.094 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.210 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.287 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.288 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.366 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.368 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.428 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.429 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.499 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.507 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.570 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.572 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.640 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.645 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.712 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.714 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:39:06 compute-0 nova_compute[189381]: 2025-11-25 10:39:06.776 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
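[annotation] Each disk audit above wraps qemu-img info in oslo_concurrency.prlimit so the probe's address space (1 GiB) and CPU time (30 s) are capped while a possibly untrusted image is inspected. A minimal sketch reproducing one of the logged invocations and parsing its JSON output; the disk path is copied from this log:

    # Minimal sketch: rerun the disk audit command exactly as logged and
    # read back the image metadata.
    import json
    import subprocess

    path = '/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk'
    cmd = [
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', path, '--force-share', '--output=json',
    ]
    info = json.loads(subprocess.run(cmd, check=True, capture_output=True).stdout)
    print(info['format'], info['virtual-size'], info.get('actual-size'))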
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.123 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.126 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5054MB free_disk=72.18669128417969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.126 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.127 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.370 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.372 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.372 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.373 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.592 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.611 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.633 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
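[annotation] The schedulable capacity placement works from is derived from this inventory as (total - reserved) * allocation_ratio per resource class, which is why a host with 8 physical vCPUs can carry more than 8 allocated ones. Worked out from the values logged above:

    # Effective placement capacity implied by the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g} schedulable')  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2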
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.635 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.636 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
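[annotation] The acquiring/acquired/released triplets with waited/held durations come from oslo.concurrency's named locks. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; by default the named lock is process-internal, matching the resource tracker's usage here:

    # Minimal sketch of the oslo.concurrency locking pattern behind the
    # "Acquiring lock / acquired / released" lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        ...  # audit work runs here; the log records waited/held durations

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass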
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.637 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.638 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:39:07 compute-0 nova_compute[189381]: 2025-11-25 10:39:07.650 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:39:07 compute-0 podman[242151]: 2025-11-25 10:39:07.982240182 +0000 UTC m=+0.085934603 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:39:08 compute-0 podman[242150]: 2025-11-25 10:39:08.00652482 +0000 UTC m=+0.114215996 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
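[annotation] Each health_status record above is podman executing the container's configured healthcheck ('test': '/openstack/healthcheck ...') on its timer. A minimal sketch triggering the same checks on demand; the container names are taken from the records above, and the exit code is 0 for healthy:

    # Minimal sketch: run the configured healthchecks by hand.
    import subprocess

    for name in ('node_exporter', 'openstack_network_exporter'):
        r = subprocess.run(['podman', 'healthcheck', 'run', name])
        print(name, 'healthy' if r.returncode == 0 else 'unhealthy')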
Nov 25 10:39:08 compute-0 nova_compute[189381]: 2025-11-25 10:39:08.559 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:09 compute-0 nova_compute[189381]: 2025-11-25 10:39:09.636 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:10 compute-0 podman[242194]: 2025-11-25 10:39:10.005375221 +0000 UTC m=+0.118294693 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:39:10 compute-0 nova_compute[189381]: 2025-11-25 10:39:10.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:10 compute-0 nova_compute[189381]: 2025-11-25 10:39:10.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:39:11 compute-0 nova_compute[189381]: 2025-11-25 10:39:11.017 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:11 compute-0 nova_compute[189381]: 2025-11-25 10:39:11.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:12 compute-0 nova_compute[189381]: 2025-11-25 10:39:12.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:12 compute-0 nova_compute[189381]: 2025-11-25 10:39:12.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:39:12 compute-0 nova_compute[189381]: 2025-11-25 10:39:12.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:39:12 compute-0 nova_compute[189381]: 2025-11-25 10:39:12.596 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:12 compute-0 nova_compute[189381]: 2025-11-25 10:39:12.634 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:39:12 compute-0 nova_compute[189381]: 2025-11-25 10:39:12.635 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:39:12 compute-0 nova_compute[189381]: 2025-11-25 10:39:12.635 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:39:12 compute-0 nova_compute[189381]: 2025-11-25 10:39:12.636 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:39:12 compute-0 podman[242220]: 2025-11-25 10:39:12.967984981 +0000 UTC m=+0.082667989 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:39:13 compute-0 nova_compute[189381]: 2025-11-25 10:39:13.561 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:14 compute-0 nova_compute[189381]: 2025-11-25 10:39:14.059 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:39:14 compute-0 nova_compute[189381]: 2025-11-25 10:39:14.081 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:39:14 compute-0 nova_compute[189381]: 2025-11-25 10:39:14.082 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
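[annotation] The network_info blob cached above carries the instance's addressing. A minimal sketch extracting the fixed and floating IPs from that structure, abbreviated here to the fields actually used:

    # Minimal sketch: walk the cached network_info structure (fields and
    # addresses copied from the update_instance_cache_with_nw_info line).
    network_info = [{
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.95", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.239",
                                       "type": "floating"}]}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", floats)  # 192.168.0.95 -> ['192.168.122.239']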
Nov 25 10:39:14 compute-0 nova_compute[189381]: 2025-11-25 10:39:14.083 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:16 compute-0 nova_compute[189381]: 2025-11-25 10:39:16.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:17 compute-0 nova_compute[189381]: 2025-11-25 10:39:17.599 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:17 compute-0 podman[242238]: 2025-11-25 10:39:17.976003233 +0000 UTC m=+0.087323902 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:39:18 compute-0 nova_compute[189381]: 2025-11-25 10:39:18.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:39:18 compute-0 nova_compute[189381]: 2025-11-25 10:39:18.566 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:22 compute-0 nova_compute[189381]: 2025-11-25 10:39:22.602 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:23 compute-0 nova_compute[189381]: 2025-11-25 10:39:23.570 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:24 compute-0 podman[242262]: 2025-11-25 10:39:24.989065644 +0000 UTC m=+0.091574425 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Nov 25 10:39:24 compute-0 podman[242263]: 2025-11-25 10:39:24.990281549 +0000 UTC m=+0.099305697 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118)
Nov 25 10:39:26 compute-0 podman[242303]: 2025-11-25 10:39:26.97908497 +0000 UTC m=+0.089981339 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release=1214.1726694543, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git)
Nov 25 10:39:27 compute-0 nova_compute[189381]: 2025-11-25 10:39:27.605 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:28 compute-0 nova_compute[189381]: 2025-11-25 10:39:28.572 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:29 compute-0 podman[203557]: time="2025-11-25T10:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:39:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:39:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 25 10:39:31 compute-0 openstack_network_exporter[205722]: ERROR   10:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:39:31 compute-0 openstack_network_exporter[205722]: ERROR   10:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:39:31 compute-0 openstack_network_exporter[205722]: ERROR   10:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:39:31 compute-0 openstack_network_exporter[205722]: ERROR   10:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:39:31 compute-0 openstack_network_exporter[205722]: ERROR   10:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:39:32 compute-0 nova_compute[189381]: 2025-11-25 10:39:32.608 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:33 compute-0 nova_compute[189381]: 2025-11-25 10:39:33.575 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:33 compute-0 podman[242323]: 2025-11-25 10:39:33.936689764 +0000 UTC m=+0.053788818 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 10:39:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:39:36.040 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:39:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:39:36.041 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:39:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:39:36.041 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:39:37 compute-0 nova_compute[189381]: 2025-11-25 10:39:37.610 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:38 compute-0 nova_compute[189381]: 2025-11-25 10:39:38.579 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:38 compute-0 podman[242343]: 2025-11-25 10:39:38.992334274 +0000 UTC m=+0.097187747 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:39:38 compute-0 podman[242342]: 2025-11-25 10:39:38.998719727 +0000 UTC m=+0.107906624 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, architecture=x86_64, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter)
Nov 25 10:39:40 compute-0 podman[242388]: 2025-11-25 10:39:40.997735512 +0000 UTC m=+0.110244001 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:39:42 compute-0 nova_compute[189381]: 2025-11-25 10:39:42.611 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:43 compute-0 nova_compute[189381]: 2025-11-25 10:39:43.582 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:43 compute-0 podman[242414]: 2025-11-25 10:39:43.970947927 +0000 UTC m=+0.075306027 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:39:47 compute-0 nova_compute[189381]: 2025-11-25 10:39:47.616 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:48 compute-0 nova_compute[189381]: 2025-11-25 10:39:48.585 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:48 compute-0 podman[242434]: 2025-11-25 10:39:48.971335238 +0000 UTC m=+0.076723988 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:39:52 compute-0 nova_compute[189381]: 2025-11-25 10:39:52.617 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:53 compute-0 nova_compute[189381]: 2025-11-25 10:39:53.588 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:55 compute-0 podman[242460]: 2025-11-25 10:39:55.965126955 +0000 UTC m=+0.076926083 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 10:39:55 compute-0 podman[242459]: 2025-11-25 10:39:55.978678325 +0000 UTC m=+0.093155170 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute)
Nov 25 10:39:57 compute-0 nova_compute[189381]: 2025-11-25 10:39:57.619 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:57 compute-0 podman[242497]: 2025-11-25 10:39:57.969518585 +0000 UTC m=+0.074508344 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9)
Nov 25 10:39:58 compute-0 nova_compute[189381]: 2025-11-25 10:39:58.591 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:39:59 compute-0 podman[203557]: time="2025-11-25T10:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:39:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:39:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Nov 25 10:40:01 compute-0 openstack_network_exporter[205722]: ERROR   10:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:40:01 compute-0 openstack_network_exporter[205722]: ERROR   10:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:40:01 compute-0 openstack_network_exporter[205722]: ERROR   10:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:40:01 compute-0 openstack_network_exporter[205722]: ERROR   10:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:40:01 compute-0 openstack_network_exporter[205722]: ERROR   10:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:40:02 compute-0 nova_compute[189381]: 2025-11-25 10:40:02.623 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:03 compute-0 nova_compute[189381]: 2025-11-25 10:40:03.594 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:04 compute-0 podman[242517]: 2025-11-25 10:40:04.971701195 +0000 UTC m=+0.076052608 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 10:40:06 compute-0 nova_compute[189381]: 2025-11-25 10:40:06.031 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.057 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.059 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.152 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.223 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.224 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.293 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.295 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.364 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.365 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.426 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.439 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.507 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.508 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.577 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.580 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.626 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.648 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.649 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:07 compute-0 nova_compute[189381]: 2025-11-25 10:40:07.717 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.066 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.068 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=72.18669128417969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.068 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.069 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.158 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.159 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.160 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.160 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.262 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.280 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.282 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.282 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:08 compute-0 nova_compute[189381]: 2025-11-25 10:40:08.597 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:09 compute-0 nova_compute[189381]: 2025-11-25 10:40:09.282 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:09 compute-0 podman[242561]: 2025-11-25 10:40:09.972221088 +0000 UTC m=+0.078997845 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:40:09 compute-0 podman[242560]: 2025-11-25 10:40:09.995341792 +0000 UTC m=+0.104985192 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible)
Nov 25 10:40:11 compute-0 nova_compute[189381]: 2025-11-25 10:40:11.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:11 compute-0 nova_compute[189381]: 2025-11-25 10:40:11.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:40:11 compute-0 podman[242604]: 2025-11-25 10:40:11.989383417 +0000 UTC m=+0.106053494 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 10:40:12 compute-0 nova_compute[189381]: 2025-11-25 10:40:12.017 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:12 compute-0 nova_compute[189381]: 2025-11-25 10:40:12.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:12 compute-0 nova_compute[189381]: 2025-11-25 10:40:12.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:40:12 compute-0 nova_compute[189381]: 2025-11-25 10:40:12.628 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:12 compute-0 nova_compute[189381]: 2025-11-25 10:40:12.778 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:40:12 compute-0 nova_compute[189381]: 2025-11-25 10:40:12.779 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:40:12 compute-0 nova_compute[189381]: 2025-11-25 10:40:12.779 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:40:13 compute-0 nova_compute[189381]: 2025-11-25 10:40:13.602 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:14 compute-0 nova_compute[189381]: 2025-11-25 10:40:14.144 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updating instance_info_cache with network_info: [{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:40:14 compute-0 nova_compute[189381]: 2025-11-25 10:40:14.162 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:40:14 compute-0 nova_compute[189381]: 2025-11-25 10:40:14.163 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:40:14 compute-0 nova_compute[189381]: 2025-11-25 10:40:14.163 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:14 compute-0 nova_compute[189381]: 2025-11-25 10:40:14.164 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:14 compute-0 podman[242630]: 2025-11-25 10:40:14.751081867 +0000 UTC m=+0.078730327 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
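The podman entry above is a scheduled healthcheck run for the multipathd container: health_status=healthy with a zero failing streak, driven by the /openstack/healthcheck script mounted read-only from /var/lib/openstack/healthchecks/multipathd. The same status can be read out of band; a sketch assuming podman is on PATH (the .State.Health key name varies across podman releases, so both spellings are tried):

    import json
    import subprocess

    def health_status(name: str) -> str:
        # `podman inspect` emits a JSON array with one object per container.
        out = subprocess.run(["podman", "inspect", name],
                             check=True, capture_output=True, text=True).stdout
        state = json.loads(out)[0]["State"]
        # Key name differs across podman versions (assumption), so try both.
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("multipathd"))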
Nov 25 10:40:16 compute-0 nova_compute[189381]: 2025-11-25 10:40:16.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:17 compute-0 nova_compute[189381]: 2025-11-25 10:40:17.629 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:18 compute-0 nova_compute[189381]: 2025-11-25 10:40:18.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:40:18 compute-0 nova_compute[189381]: 2025-11-25 10:40:18.605 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:19 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:19.764 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:40:19 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:19.765 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:40:19 compute-0 nova_compute[189381]: 2025-11-25 10:40:19.768 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:19 compute-0 podman[242653]: 2025-11-25 10:40:19.956486283 +0000 UTC m=+0.063584035 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:40:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:20.768 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
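The three ovn_metadata_agent entries above trace OVN's nb_cfg acknowledgement: northd bumps SB_Global.nb_cfg (4 -> 5), the agent matches the update event, delays one second, then acks by writing neutron:ovn-metadata-sb-cfg into its own Chassis_Private row via a DbSetCommand. A minimal ovsdbapp sketch of that final write, assuming the standard OvnSbApiIdlImpl wiring and that db_set() passes if_exists through to DbSetCommand (the endpoint is a placeholder; the record UUID is taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB = "tcp:127.0.0.1:6642"  # placeholder southbound endpoint (assumption)
    idl = connection.OvsdbIdl.from_server(SB, "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Same shape as the logged command: one column update, if_exists=True.
    api.db_set("Chassis_Private", "3fcb3423-a4d5-4f72-950c-307893e4a985",
               ("external_ids", {"neutron:ovn-metadata-sb-cfg": "5"}),
               if_exists=True).execute(check_error=True)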
Nov 25 10:40:22 compute-0 nova_compute[189381]: 2025-11-25 10:40:22.632 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:23 compute-0 nova_compute[189381]: 2025-11-25 10:40:23.609 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:26 compute-0 nova_compute[189381]: 2025-11-25 10:40:26.581 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:26 compute-0 nova_compute[189381]: 2025-11-25 10:40:26.582 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
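The Acquiring/acquired pair above, and the matching "released" line that follows such a pair, is the standard oslo.concurrency trace: Nova serializes builds per instance by wrapping _do_build_and_run_instance in a lock named after the instance UUID, and the same pattern recurs below for "compute_resources", the disk.info files, and "vgpu_resources". A minimal reproduction of the primitive, assuming oslo.concurrency is installed (the function body is a placeholder):

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)  # surfaces the lockutils DEBUG trace

    uuid = "613e6b77-82b6-426c-90b1-38d6776feb1f"

    @lockutils.synchronized(uuid)
    def locked_do_build_and_run_instance():
        # concurrent calls for the same UUID serialize here
        pass

    locked_do_build_and_run_instance()

The default internal (semaphore) lock is what produces the inner() log lines seen here; external file locks log the same way with a fair amount of extra path handling.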
Nov 25 10:40:26 compute-0 nova_compute[189381]: 2025-11-25 10:40:26.603 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 10:40:26 compute-0 nova_compute[189381]: 2025-11-25 10:40:26.692 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:26 compute-0 nova_compute[189381]: 2025-11-25 10:40:26.693 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:26 compute-0 nova_compute[189381]: 2025-11-25 10:40:26.704 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 10:40:26 compute-0 nova_compute[189381]: 2025-11-25 10:40:26.705 189385 INFO nova.compute.claims [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 10:40:26 compute-0 podman[242677]: 2025-11-25 10:40:26.966117508 +0000 UTC m=+0.082201058 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118)
Nov 25 10:40:26 compute-0 podman[242678]: 2025-11-25 10:40:26.978380375 +0000 UTC m=+0.084740312 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.107 189385 DEBUG nova.compute.provider_tree [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.132 189385 DEBUG nova.scheduler.client.report [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
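The inventory dict logged above is what placement already holds for this provider, and it fixes schedulable capacity per resource class as (total - reserved) * allocation_ratio. Checking the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32   MEMORY_MB: 7167   DISK_GB: 70.2

So this 8-vCPU, 7.5 GiB node advertises 32 vCPUs, 7167 MiB of RAM, and 70.2 GiB of disk to the scheduler.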
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.160 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.162 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.199 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.200 189385 DEBUG nova.network.neutron [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.213 189385 INFO nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.244 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.343 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.345 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.346 189385 INFO nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Creating image(s)
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.347 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.348 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.349 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.365 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.449 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
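Every qemu-img info call in this build is wrapped in oslo_concurrency.prlimit, capping the child process at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a malformed image cannot wedge the compute service. A minimal sketch of the same wrapper, assuming oslo.concurrency's processutils.execute / ProcessLimits API:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3,  # --as
                                        cpu_time=30)                  # --cpu
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87",
        "--force-share", "--output=json",
        prlimit=limits)
    print(out)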
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.451 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "efa46ac01001129056abbd05fc9719c35c46db87" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.452 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.463 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.530 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.532 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87,backing_fmt=raw /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.584 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87,backing_fmt=raw /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk 1073741824" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
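The create command above is the Qcow2 image backend building the root disk as a copy-on-write overlay: the 1 GiB virtual disk starts near-empty and shares all blocks with the cached raw base image under _base. Reproducing the logged command as a sketch (paths verbatim from the log; plain subprocess stands in for processutils):

    import os
    import subprocess

    base = "/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87"
    disk = "/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk"

    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         disk, "1073741824"],   # 1 GiB virtual size, matching the flavor's root_gb=1
        check=True, env={**os.environ, "LC_ALL": "C", "LANG": "C"})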
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.586 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.588 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.636 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.650 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.652 189385 DEBUG nova.virt.disk.api [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Checking if we can resize image /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.652 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.718 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.720 189385 DEBUG nova.virt.disk.api [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Cannot resize image /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
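The "Cannot resize image ... to a smaller size" line is expected here, not an error: can_resize_image only permits growing, and the flavor's 1 GiB root already equals the overlay's virtual size, so the resize step is simply skipped. A sketch of the same check (a hypothetical helper mirroring the logic of nova.virt.disk.api.can_resize_image):

    import json
    import subprocess

    def can_resize_image(path: str, new_size: int) -> bool:
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True).stdout
        virtual_size = json.loads(out)["virtual-size"]
        return new_size > virtual_size   # growing only; shrinking is refused

    print(can_resize_image(
        "/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk",
        1073741824))   # False: requested size equals current virtual size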
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.721 189385 DEBUG nova.objects.instance [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'migration_context' on Instance uuid 613e6b77-82b6-426c-90b1-38d6776feb1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.742 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.743 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.744 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.762 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.823 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.824 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.825 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.837 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.898 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.900 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.946 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.947 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:27 compute-0 nova_compute[189381]: 2025-11-25 10:40:27.948 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.012 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.013 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.014 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Ensure instance console log exists: /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.015 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.016 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.016 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.612 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.840 189385 DEBUG nova.network.neutron [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Successfully updated port: 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.866 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.867 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquired lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.868 189385 DEBUG nova.network.neutron [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 10:40:28 compute-0 podman[242741]: 2025-11-25 10:40:28.975238523 +0000 UTC m=+0.076966715 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9)
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.976 189385 DEBUG nova.compute.manager [req-058970d6-0536-45b0-96ca-804b088cc9aa req-5085ba7f-6c17-4f47-bb9b-a7d23c3f1398 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received event network-changed-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.977 189385 DEBUG nova.compute.manager [req-058970d6-0536-45b0-96ca-804b088cc9aa req-5085ba7f-6c17-4f47-bb9b-a7d23c3f1398 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Refreshing instance network info cache due to event network-changed-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:40:28 compute-0 nova_compute[189381]: 2025-11-25 10:40:28.977 189385 DEBUG oslo_concurrency.lockutils [req-058970d6-0536-45b0-96ca-804b088cc9aa req-5085ba7f-6c17-4f47-bb9b-a7d23c3f1398 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:40:29 compute-0 nova_compute[189381]: 2025-11-25 10:40:29.119 189385 DEBUG nova.network.neutron [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 10:40:29 compute-0 podman[203557]: time="2025-11-25T10:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:40:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:40:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
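The two access-log lines above are the podman system service answering prometheus-podman-exporter over the libpod REST API; per the exporter's config_data logged earlier, CONTAINER_HOST is unix:///run/podman/podman.sock. A stdlib-only sketch of issuing the same containers/json query over that socket (socket path and API version taken from the log):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket (stdlib-only sketch)."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(conn.getresponse().read()[:200])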
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.130 189385 DEBUG nova.network.neutron [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updating instance_info_cache with network_info: [{"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.153 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Releasing lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.154 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Instance network_info: |[{"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.154 189385 DEBUG oslo_concurrency.lockutils [req-058970d6-0536-45b0-96ca-804b088cc9aa req-5085ba7f-6c17-4f47-bb9b-a7d23c3f1398 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.155 189385 DEBUG nova.network.neutron [req-058970d6-0536-45b0-96ca-804b088cc9aa req-5085ba7f-6c17-4f47-bb9b-a7d23c3f1398 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Refreshing network info cache for port 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.157 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Start _get_guest_xml network_info=[{"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:31:35Z,direct_url=<?>,disk_format='qcow2',id=d3f57a9d-2502-43be-9afd-d2b6e1c15c08,min_disk=0,min_ram=0,name='cirros',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:31:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 1, 'device_type': 'disk', 'encrypted': False, 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
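The disk_info mapping in the entry above pins the root and ephemeral disks to virtio (vda, vdb) and the config drive to a SATA cdrom (sda); _get_guest_xml turns that mapping into libvirt <disk> elements in the guest definition. A sketch of the shape of the resulting root-disk element, built with the stdlib (the element layout is the generic libvirt form, an assumption, not Nova's exact serializer):

    import xml.etree.ElementTree as ET

    disk = ET.Element("disk", type="file", device="disk")
    ET.SubElement(disk, "driver", name="qemu", type="qcow2")
    ET.SubElement(disk, "source",
                  file="/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk")
    ET.SubElement(disk, "target", dev="vda", bus="virtio")   # from the mapping
    ET.SubElement(disk, "boot", order="1")                   # boot_index '1'
    print(ET.tostring(disk, encoding="unicode"))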
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.165 189385 WARNING nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.175 189385 DEBUG nova.virt.libvirt.host [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.176 189385 DEBUG nova.virt.libvirt.host [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.183 189385 DEBUG nova.virt.libvirt.host [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.184 189385 DEBUG nova.virt.libvirt.host [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
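Nova probes for a usable CPU controller first under cgroups v1 (missing here), then under v2 (found), which decides whether CPU shares/quota can be applied to the guest. On a v2 host the available controllers sit in a single space-separated file; a stdlib sketch of the v2 probe (the path is the standard unified-hierarchy mount, assumed for this host):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
        # cgroup v2 lists every available controller in one file
        controllers = Path(root, "cgroup.controllers").read_text().split()
        return "cpu" in controllers

    print(has_cgroupsv2_cpu_controller())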
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.184 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.185 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:31:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8b869036-db8e-4fd3-b57a-e59e272f3c73',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:31:35Z,direct_url=<?>,disk_format='qcow2',id=d3f57a9d-2502-43be-9afd-d2b6e1c15c08,min_disk=0,min_ram=0,name='cirros',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:31:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.186 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.186 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.186 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.187 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.187 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.188 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.188 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.189 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.189 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.190 189385 DEBUG nova.virt.hardware [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
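The hardware.py lines above walk Nova's CPU-topology selection: with no hw:cpu_sockets/hw:cpu_cores/hw:cpu_threads extra specs on the flavor or image, both the preferred topology and the limits stay at 0:0:0 (unconstrained, capped at 65536 per dimension), and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A rough, self-contained illustration of that enumeration step (not Nova's actual implementation):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # enumerate factorizations sockets * cores * threads == vcpus within the limits
        for s, c, t in itertools.product(range(1, min(vcpus, max_sockets) + 1),
                                         range(1, min(vcpus, max_cores) + 1),
                                         range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log above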
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.193 189385 DEBUG nova.virt.libvirt.vif [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj',id=3,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-lse7ova1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:40:27Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODg0MjI1ODAzNzA1MzM1NzY2NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Nov 25 10:40:30 compute-0 nova_compute[189381]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODg0MjI1ODAzNzA1MzM1NzY2NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=613e6b77-82b6-426c-90b1-38d6776feb1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.194 189385 DEBUG nova.network.os_vif_util [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.195 189385 DEBUG nova.network.os_vif_util [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:5f:ba,bridge_name='br-int',has_traffic_filtering=True,id=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4aa1b3c5-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
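nova_to_osvif_vif translates Nova's legacy VIF dict into an os-vif versioned object; for an OVS port the target type is VIFOpenVSwitch, and vif_name is 'tap' plus the first eleven characters of the port UUID. Roughly, the resulting object looks like this (field names taken from the repr above; a sketch, not the exact Nova call chain):

    from os_vif.objects import vif as vif_obj

    v = vif_obj.VIFOpenVSwitch(
        id='4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3',
        address='fa:16:3e:fa:5f:ba',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=True,
        vif_name='tap4aa1b3c5-4e',  # 'tap' + first 11 chars of the port UUID
    )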
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.196 189385 DEBUG nova.objects.instance [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'pci_devices' on Instance uuid 613e6b77-82b6-426c-90b1-38d6776feb1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.215 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <uuid>613e6b77-82b6-426c-90b1-38d6776feb1f</uuid>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <name>instance-00000003</name>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <memory>524288</memory>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <metadata>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <nova:name>vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj</nova:name>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 10:40:30</nova:creationTime>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <nova:flavor name="m1.small">
Nov 25 10:40:30 compute-0 nova_compute[189381]:         <nova:memory>512</nova:memory>
Nov 25 10:40:30 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 10:40:30 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 10:40:30 compute-0 nova_compute[189381]:         <nova:ephemeral>1</nova:ephemeral>
Nov 25 10:40:30 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 10:40:30 compute-0 nova_compute[189381]:         <nova:user uuid="af7a147d86064a21a94066f72173bba2">admin</nova:user>
Nov 25 10:40:30 compute-0 nova_compute[189381]:         <nova:project uuid="aef0c6ba1dd54218a527ced3f8d2a1be">admin</nova:project>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="d3f57a9d-2502-43be-9afd-d2b6e1c15c08"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 10:40:30 compute-0 nova_compute[189381]:         <nova:port uuid="4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3">
Nov 25 10:40:30 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="192.168.0.183" ipVersion="4"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   </metadata>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <system>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <entry name="serial">613e6b77-82b6-426c-90b1-38d6776feb1f</entry>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <entry name="uuid">613e6b77-82b6-426c-90b1-38d6776feb1f</entry>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </system>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <os>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   </os>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <features>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <apic/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   </features>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   </clock>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <target dev="vdb" bus="virtio"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.config"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:fa:5f:ba"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <target dev="tap4aa1b3c5-4e"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </interface>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/console.log" append="off"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </serial>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <video>
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </video>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 10:40:30 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 10:40:30 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 10:40:30 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:40:30 compute-0 nova_compute[189381]: </domain>
Nov 25 10:40:30 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
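The rendered domain XML above encodes the flavor and image settings: a q35 machine type (from image_hw_machine_type), 524288 KiB = 512 MiB of RAM (memory_mb=512), one vCPU with a 1:1:1 topology, two qcow2 virtio disks (the root disk plus the 1 GiB ephemeral disk.eph0), a SATA cdrom slot reserved for the config drive, and an ethernet-type tap interface with MTU 1442 that os-vif attaches to br-int. A quick stdlib sketch for summarizing the devices, assuming the XML is saved to domain.xml:

    import xml.etree.ElementTree as ET

    dom = ET.parse('domain.xml').getroot()
    print(dom.findtext('name'), int(dom.findtext('memory')) // 1024, 'MiB')  # instance-00000003 512 MiB
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        print(disk.get('device'), disk.find('target').get('dev'),
              src.get('file') if src is not None else '-')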
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.223 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Preparing to wait for external event network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.223 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.224 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.224 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
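Before asking libvirt to start the guest, the compute manager registers a waiter for network-vif-plugged-<port-id>; Neutron will post that event through the external-events API once OVN reports the port up, and only then does spawn proceed. The '-events' lock above serializes access to the per-instance waiter registry. The pattern, reduced to a thread-based sketch (Nova itself uses eventlet, so this is illustrative only):

    import threading

    _events, _lock = {}, threading.Lock()

    def prepare_for_event(tag):
        # registered before the VIF is plugged
        with _lock:
            return _events.setdefault(tag, threading.Event())

    def deliver_event(tag):
        # invoked when Neutron posts the external event
        with _lock:
            ev = _events.pop(tag, None)
        if ev is not None:
            ev.set()

    waiter = prepare_for_event('network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3')
    # ... plug the VIF, define and launch the domain ...
    # waiter.wait(timeout=300) would then block until deliver_event() fires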
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.225 189385 DEBUG nova.virt.libvirt.vif [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj',id=3,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-lse7ova1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:40:27Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODg0MjI1ODAzNzA1MzM1NzY2NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Nov 25 10:40:30 compute-0 nova_compute[189381]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODg0MjI1ODAzNzA1MzM1NzY2NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=613e6b77-82b6-426c-90b1-38d6776feb1f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.226 189385 DEBUG nova.network.os_vif_util [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.227 189385 DEBUG nova.network.os_vif_util [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fa:5f:ba,bridge_name='br-int',has_traffic_filtering=True,id=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4aa1b3c5-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.227 189385 DEBUG os_vif [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:5f:ba,bridge_name='br-int',has_traffic_filtering=True,id=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4aa1b3c5-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.228 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.229 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.230 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.233 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.234 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4aa1b3c5-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.234 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4aa1b3c5-4e, col_values=(('external_ids', {'iface-id': '4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fa:5f:ba', 'vm-uuid': '613e6b77-82b6-426c-90b1-38d6776feb1f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.237 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.238 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:40:30 compute-0 NetworkManager[56317]: <info>  [1764067230.2382] manager: (tap4aa1b3c5-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.246 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.247 189385 INFO os_vif [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fa:5f:ba,bridge_name='br-int',has_traffic_filtering=True,id=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4aa1b3c5-4e')
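The ovsdbapp transactions that just committed (AddBridgeCommand, a no-op since br-int exists, then AddPortCommand plus DbSetCommand) amount to attaching the tap device to br-int and stamping the Interface row with the external_ids that ovn-controller matches against its logical ports. The shell equivalent, wrapped in Python for consistency (a sketch; os-vif speaks to ovsdb-server directly rather than shelling out):

    import subprocess

    port = 'tap4aa1b3c5-4e'
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
         '--', 'set', 'Interface', port,
         'external_ids:iface-id=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:fa:5f:ba',
         'external_ids:vm-uuid=613e6b77-82b6-426c-90b1-38d6776feb1f'],
        check=True)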
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.287 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.288 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.288 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.288 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No VIF found with MAC fa:16:3e:fa:5f:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.289 189385 INFO nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Using config drive
Nov 25 10:40:30 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:40:30.193 189385 DEBUG nova.virt.libvirt.vif [None req-4c257803-f5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 25 10:40:30 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:40:30.225 189385 DEBUG nova.virt.libvirt.vif [None req-4c257803-f5 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
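The two rsyslogd complaints correspond exactly to the vif_type=ovs records at 10:40:30.193 and 10:40:30.225 above: their instance dumps carry the full base64 user_data blob and overflow rsyslog's default 8 KiB message size, so the syslog copies were split and truncated (the journal retains the full text). If intact records matter on the syslog side, the limit can be raised near the top of /etc/rsyslog.conf, before any input modules load, e.g.:

    $MaxMessageSize 64k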
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.820 189385 INFO nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Creating config drive at /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.config
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.826 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptutdgeei execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:40:30 compute-0 nova_compute[189381]: 2025-11-25 10:40:30.974 189385 DEBUG oslo_concurrency.processutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptutdgeei" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
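The config drive is built as an ISO9660 image with Joliet (-J) and Rock Ridge (-r) extensions and the volume label config-2, the label cloud-init probes for; it becomes the SATA cdrom sda declared in the domain XML. Its contents can be listed after the fact, for example with isoinfo from genisoimage (a sketch, assuming that package is installed):

    import subprocess

    iso = '/var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.config'
    subprocess.run(['isoinfo', '-i', iso, '-J', '-f'], check=True)  # list Joliet paths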
Nov 25 10:40:31 compute-0 kernel: tap4aa1b3c5-4e: entered promiscuous mode
Nov 25 10:40:31 compute-0 ovn_controller[97779]: 2025-11-25T10:40:31Z|00040|binding|INFO|Claiming lport 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 for this chassis.
Nov 25 10:40:31 compute-0 ovn_controller[97779]: 2025-11-25T10:40:31Z|00041|binding|INFO|4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3: Claiming fa:16:3e:fa:5f:ba 192.168.0.183
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.081 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:31 compute-0 NetworkManager[56317]: <info>  [1764067231.0857] manager: (tap4aa1b3c5-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.096 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:5f:ba 192.168.0.183'], port_security=['fa:16:3e:fa:5f:ba 192.168.0.183'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-6oeui4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-port-tknjl3ychzd2', 'neutron:cidrs': '192.168.0.183/24', 'neutron:device_id': '613e6b77-82b6-426c-90b1-38d6776feb1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35870011-2c24-4719-a9ee-4942cd8ed50e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-6oeui4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-port-tknjl3ychzd2', 'neutron:project_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48d58879-e124-47b1-85de-2b7aab5c0e02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53f1de54-d9db-4691-881b-b04f921a948f, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.098 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 in datapath 35870011-2c24-4719-a9ee-4942cd8ed50e bound to our chassis
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.099 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35870011-2c24-4719-a9ee-4942cd8ed50e
Nov 25 10:40:31 compute-0 ovn_controller[97779]: 2025-11-25T10:40:31Z|00042|binding|INFO|Setting lport 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 ovn-installed in OVS
Nov 25 10:40:31 compute-0 ovn_controller[97779]: 2025-11-25T10:40:31Z|00043|binding|INFO|Setting lport 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 up in Southbound
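ovn-controller claims the lport because the Port_Binding's requested-chassis option names compute-0.ctlplane.example.com; it then sets ovn-installed on the OVS interface and flips up=true in the Southbound DB, which is the signal Neutron turns into the network-vif-plugged event Nova registered for above. The binding can be checked from any host with Southbound access, for instance:

    import subprocess

    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3'],
        check=True)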
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.105 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.117 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.118 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[27aeefb9-bdb8-4bae-acb8-3b6d82c17856]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:40:31 compute-0 systemd-udevd[242782]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:40:31 compute-0 systemd-machined[155706]: New machine qemu-3-instance-00000003.
Nov 25 10:40:31 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 25 10:40:31 compute-0 NetworkManager[56317]: <info>  [1764067231.1615] device (tap4aa1b3c5-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:40:31 compute-0 NetworkManager[56317]: <info>  [1764067231.1630] device (tap4aa1b3c5-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.168 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[125859f0-2377-4cf5-8b4f-9ae8a5f0963f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.176 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cbb7c5-57f3-453e-859f-4aab15100d57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.223 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[c8fc7460-fd92-43c6-a39a-6754a84cbe42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.250 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff3fc7b-262c-4505-a2aa-cd458bc90149]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35870011-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:64:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369752, 'reachable_time': 35658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242795, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.272 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e30cdf4b-a5d6-447c-8c6d-e49181d46ca4]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369763, 'tstamp': 369763}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242797, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369766, 'tstamp': 369766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242797, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
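The two privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e namespace: one RTM_NEWLINK for the veth leg tap35870011-21 (MAC fa:16:3e:a0:64:2e, state UP) and two RTM_NEWADDR records for 192.168.0.2/24 and the metadata address 169.254.169.254/32. A minimal sketch of the same dump done directly with pyroute2 (requires root; the agent performs this via oslo.privsep rather than in-process):

from pyroute2 import NetNS

NS = 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e'  # namespace from the log

with NetNS(NS) as ns:
    # RTM_NEWLINK: one message per interface in the namespace
    for link in ns.get_links():
        print(link.get_attr('IFLA_IFNAME'),
              link.get_attr('IFLA_ADDRESS'),
              link.get_attr('IFLA_OPERSTATE'))
    # RTM_NEWADDR: the 192.168.0.2/24 and 169.254.169.254/32 records
    for addr in ns.get_addr(label='tap35870011-21'):
        print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])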
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.275 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35870011-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.277 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.279 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.279 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35870011-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.279 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.280 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35870011-20, col_values=(('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:40:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:31.280 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
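The three ovsdbapp transactions above move the metadata tap from br-ex to br-int and stamp its external_ids; "Transaction caused no change" means the OVSDB IDL found the rows already in the desired state, so nothing was committed. A rough sketch of the equivalent calls, assuming ovsdbapp's Open_vSwitch schema API and the conventional local socket path (the path and timeout are assumptions, not values from these logs):

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local ovsdb-server socket

idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# DelPortCommand(port=tap35870011-20, bridge=br-ex, if_exists=True)
with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tap35870011-20', bridge='br-ex', if_exists=True))

# AddPortCommand + DbSetCommand, as in the two no-op transactions above
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap35870011-20', may_exist=True))
    txn.add(api.db_set('Interface', 'tap35870011-20',
                       ('external_ids',
                        {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'})))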
Nov 25 10:40:31 compute-0 openstack_network_exporter[205722]: ERROR   10:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:40:31 compute-0 openstack_network_exporter[205722]: ERROR   10:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:40:31 compute-0 openstack_network_exporter[205722]: ERROR   10:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:40:31 compute-0 openstack_network_exporter[205722]: ERROR   10:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:40:31 compute-0 openstack_network_exporter[205722]: ERROR   10:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
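These exporter errors are expected on a compute node: ovn-northd and the OVN databases run on the controllers, so there are no local control socket files for appctl to target, and the dpif-netdev/* commands only answer for a userspace (DPDK) datapath, which this host does not run. A diagnostic sketch of the same checks by hand; the socket path patterns are the conventional defaults and an assumption here:

import glob
import subprocess

# Control sockets follow the <rundir>/<daemon>.<pid>.ctl convention; on this
# node neither pattern should match, reproducing the errors above.
for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                '/var/run/openvswitch/ovsdb-server.*.ctl'):
    print(pattern, '->', glob.glob(pattern) or 'no control socket files found')

# dpif-netdev/* only answers for the userspace (netdev) datapath; with the
# kernel datapath it fails with "please specify an existing datapath".
subprocess.run(['ovs-appctl', 'dpif-netdev/pmd-rxq-show'], check=False)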
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.441 189385 DEBUG nova.compute.manager [req-fefd2cee-024a-4fbd-b274-f2b5e1571e90 req-b448d078-cb98-4bd5-9512-bc51bed780ac d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received event network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.442 189385 DEBUG oslo_concurrency.lockutils [req-fefd2cee-024a-4fbd-b274-f2b5e1571e90 req-b448d078-cb98-4bd5-9512-bc51bed780ac d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.442 189385 DEBUG oslo_concurrency.lockutils [req-fefd2cee-024a-4fbd-b274-f2b5e1571e90 req-b448d078-cb98-4bd5-9512-bc51bed780ac d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.443 189385 DEBUG oslo_concurrency.lockutils [req-fefd2cee-024a-4fbd-b274-f2b5e1571e90 req-b448d078-cb98-4bd5-9512-bc51bed780ac d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.443 189385 DEBUG nova.compute.manager [req-fefd2cee-024a-4fbd-b274-f2b5e1571e90 req-b448d078-cb98-4bd5-9512-bc51bed780ac d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Processing event network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
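The Acquiring/acquired/released triple above is oslo_concurrency's standard lock logging; nova serializes event delivery per instance behind a "<instance-uuid>-events" lock before popping the waiter for network-vif-plugged. A minimal sketch of that pattern (the body is a placeholder, not nova's code):

from oslo_concurrency import lockutils

INSTANCE = '613e6b77-82b6-426c-90b1-38d6776feb1f'

# oslo_concurrency emits the Acquiring/acquired/released DEBUG lines above
# automatically around this critical section.
with lockutils.lock(f'{INSTANCE}-events'):
    pass  # placeholder: pop the waiting network-vif-plugged event here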
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.557 189385 DEBUG nova.network.neutron [req-058970d6-0536-45b0-96ca-804b088cc9aa req-5085ba7f-6c17-4f47-bb9b-a7d23c3f1398 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updated VIF entry in instance network info cache for port 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.559 189385 DEBUG nova.network.neutron [req-058970d6-0536-45b0-96ca-804b088cc9aa req-5085ba7f-6c17-4f47-bb9b-a7d23c3f1398 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updating instance_info_cache with network_info: [{"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.577 189385 DEBUG oslo_concurrency.lockutils [req-058970d6-0536-45b0-96ca-804b088cc9aa req-5085ba7f-6c17-4f47-bb9b-a7d23c3f1398 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
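The cached network_info above records the full addressing picture for port 4aa1b3c5: fixed IP 192.168.0.183 on 192.168.0.0/24 with floating IP 192.168.122.189, MTU 1442 over a tunneled network, bound by the ovn driver. A short sketch that extracts those facts from the same JSON shape (trimmed to only the fields used here):

import json

network_info = json.loads('''[{"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3",
  "address": "fa:16:3e:fa:5f:ba",
  "network": {"subnets": [{"cidr": "192.168.0.0/24",
    "ips": [{"address": "192.168.0.183",
             "floating_ips": [{"address": "192.168.122.189"}]}]}]}}]''')

for vif in network_info:
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            floats = [f['address'] for f in ip.get('floating_ips', [])]
            print(vif['address'], ip['address'], 'floating:', floats)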
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.665 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.666 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764067231.665074, 613e6b77-82b6-426c-90b1-38d6776feb1f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.667 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] VM Started (Lifecycle Event)
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.671 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.677 189385 INFO nova.virt.libvirt.driver [-] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Instance spawned successfully.
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.677 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.700 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.709 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.713 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.714 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.715 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.715 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.716 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.716 189385 DEBUG nova.virt.libvirt.driver [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.735 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.736 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764067231.6651967, 613e6b77-82b6-426c-90b1-38d6776feb1f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.736 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] VM Paused (Lifecycle Event)
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.766 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.772 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764067231.6698494, 613e6b77-82b6-426c-90b1-38d6776feb1f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.773 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] VM Resumed (Lifecycle Event)
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.807 189385 INFO nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Took 4.46 seconds to spawn the instance on the hypervisor.
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.807 189385 DEBUG nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.874 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.888 189385 INFO nova.compute.manager [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Took 5.24 seconds to build instance.
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.892 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:40:31 compute-0 nova_compute[189381]: 2025-11-25 10:40:31.915 189385 DEBUG oslo_concurrency.lockutils [None req-4c257803-f544-466d-81a6-4f5aebe18ca4 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
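From 10:40:31.665 to 10:40:31.915 the spawn completes: the guest is created on the hypervisor, the libvirt lifecycle events (Started, Paused, Resumed) each trigger a power-state check, and the sync is skipped while task_state is still spawning; the build finishes in 5.24 seconds and the instance lock is released after 5.333 seconds. A paraphrased sketch of the skip decision seen in the "Synchronizing instance power state" and "pending task" lines (illustrative only, not nova's implementation):

def sync_power_state(instance, vm_power_state):
    # nova skips the sync while a task is in flight, exactly as logged:
    # "During sync_power_state the instance has a pending task (spawning)."
    if instance['task_state'] is not None:
        return 'skipped: pending task %s' % instance['task_state']
    if instance['power_state'] != vm_power_state:
        instance['power_state'] = vm_power_state
        return 'updated DB power_state to %s' % vm_power_state
    return 'in sync'

# values from the log: vm_state=building, task_state=spawning,
# DB power_state=0 (NOSTATE), VM power_state=1 (RUNNING)
print(sync_power_state(
    {'vm_state': 'building', 'task_state': 'spawning', 'power_state': 0}, 1))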
Nov 25 10:40:32 compute-0 nova_compute[189381]: 2025-11-25 10:40:32.641 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:33 compute-0 sshd-session[242805]: Connection closed by authenticating user root 171.244.51.45 port 48424 [preauth]
Nov 25 10:40:33 compute-0 nova_compute[189381]: 2025-11-25 10:40:33.526 189385 DEBUG nova.compute.manager [req-032b2a38-7fc7-49f4-a9f5-fa1d22c603bf req-f1ae0789-9fb8-410c-a248-02e9400ce54d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received event network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:40:33 compute-0 nova_compute[189381]: 2025-11-25 10:40:33.527 189385 DEBUG oslo_concurrency.lockutils [req-032b2a38-7fc7-49f4-a9f5-fa1d22c603bf req-f1ae0789-9fb8-410c-a248-02e9400ce54d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:33 compute-0 nova_compute[189381]: 2025-11-25 10:40:33.527 189385 DEBUG oslo_concurrency.lockutils [req-032b2a38-7fc7-49f4-a9f5-fa1d22c603bf req-f1ae0789-9fb8-410c-a248-02e9400ce54d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:33 compute-0 nova_compute[189381]: 2025-11-25 10:40:33.528 189385 DEBUG oslo_concurrency.lockutils [req-032b2a38-7fc7-49f4-a9f5-fa1d22c603bf req-f1ae0789-9fb8-410c-a248-02e9400ce54d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:33 compute-0 nova_compute[189381]: 2025-11-25 10:40:33.528 189385 DEBUG nova.compute.manager [req-032b2a38-7fc7-49f4-a9f5-fa1d22c603bf req-f1ae0789-9fb8-410c-a248-02e9400ce54d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] No waiting events found dispatching network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:40:33 compute-0 nova_compute[189381]: 2025-11-25 10:40:33.529 189385 WARNING nova.compute.manager [req-032b2a38-7fc7-49f4-a9f5-fa1d22c603bf req-f1ae0789-9fb8-410c-a248-02e9400ce54d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received unexpected event network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 for instance with vm_state active and task_state None.
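This WARNING is benign: the first copy of network-vif-plugged (10:40:31) satisfied the waiter registered during spawn, so this second copy (10:40:33) finds no waiter to pop and is logged as unexpected against the now-active instance. A toy sketch of that pop-or-warn flow (illustrative structure, not nova's InstanceEvents class):

instance_events = {}  # instance uuid -> {event name: waiter}

def pop_instance_event(instance_uuid, event_name):
    return instance_events.get(instance_uuid, {}).pop(event_name, None)

EVT = 'network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3'
if pop_instance_event('613e6b77-82b6-426c-90b1-38d6776feb1f', EVT) is None:
    # -> "No waiting events found dispatching ..." then the WARNING above
    print('Received unexpected event', EVT)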
Nov 25 10:40:34 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 10:40:34 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 10:40:35 compute-0 nova_compute[189381]: 2025-11-25 10:40:35.239 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:35 compute-0 podman[242826]: 2025-11-25 10:40:35.990965098 +0000 UTC m=+0.100338147 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
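The podman health_status events here and below come from each container's configured healthcheck: a probe mounted at /openstack ('test': '/openstack/healthcheck') that podman runs periodically; health_status=healthy with health_failing_streak=0 means the probe exited 0. A sketch of running the same probe on demand (assumes host access to the podman CLI):

import subprocess

# `podman healthcheck run` executes the container's configured test command
# and exits 0 when the probe reports healthy.
result = subprocess.run(
    ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
    capture_output=True, text=True)
print('healthy' if result.returncode == 0 else result.stdout or result.stderr)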
Nov 25 10:40:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:36.041 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:40:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:36.042 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:40:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:40:36.043 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:40:37 compute-0 nova_compute[189381]: 2025-11-25 10:40:37.645 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:40 compute-0 nova_compute[189381]: 2025-11-25 10:40:40.248 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:40 compute-0 podman[242847]: 2025-11-25 10:40:40.377266559 +0000 UTC m=+0.094029213 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:40:40 compute-0 podman[242846]: 2025-11-25 10:40:40.397011825 +0000 UTC m=+0.107928568 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Nov 25 10:40:42 compute-0 nova_compute[189381]: 2025-11-25 10:40:42.648 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:43 compute-0 podman[242890]: 2025-11-25 10:40:43.034626317 +0000 UTC m=+0.146354049 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 10:40:44 compute-0 podman[242916]: 2025-11-25 10:40:44.993701832 +0000 UTC m=+0.101562932 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd)
Nov 25 10:40:45 compute-0 nova_compute[189381]: 2025-11-25 10:40:45.254 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:47 compute-0 nova_compute[189381]: 2025-11-25 10:40:47.652 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:50 compute-0 nova_compute[189381]: 2025-11-25 10:40:50.260 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:50 compute-0 podman[242937]: 2025-11-25 10:40:50.970451531 +0000 UTC m=+0.077695766 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:40:52 compute-0 nova_compute[189381]: 2025-11-25 10:40:52.655 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:55 compute-0 nova_compute[189381]: 2025-11-25 10:40:55.265 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:57 compute-0 nova_compute[189381]: 2025-11-25 10:40:57.658 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:40:57 compute-0 podman[242962]: 2025-11-25 10:40:57.985384173 +0000 UTC m=+0.089578183 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:40:58 compute-0 podman[242961]: 2025-11-25 10:40:58.028319325 +0000 UTC m=+0.134910745 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:40:59 compute-0 podman[203557]: time="2025-11-25T10:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:40:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:40:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 25 10:41:00 compute-0 podman[242997]: 2025-11-25 10:41:00.000737376 +0000 UTC m=+0.104473307 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, config_id=edpm, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 25 10:41:00 compute-0 nova_compute[189381]: 2025-11-25 10:41:00.270 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:01 compute-0 ovn_controller[97779]: 2025-11-25T10:41:01Z|00044|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 25 10:41:01 compute-0 openstack_network_exporter[205722]: ERROR   10:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:41:01 compute-0 openstack_network_exporter[205722]: ERROR   10:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:41:01 compute-0 openstack_network_exporter[205722]: ERROR   10:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:41:01 compute-0 openstack_network_exporter[205722]: ERROR   10:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:41:01 compute-0 openstack_network_exporter[205722]: ERROR   10:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:41:02 compute-0 nova_compute[189381]: 2025-11-25 10:41:02.660 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.331 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.332 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
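
The registration burst above shows the compute agent loading each pollster as a stevedore extension and binding all of them to one shared ThreadPoolExecutor (note the identical executor address in every line). A minimal sketch of that plugin-loading pattern follows; the namespace string and worker count are assumptions, not values taken from this log.

    from concurrent.futures import ThreadPoolExecutor

    from stevedore import extension

    # Assumed entry-point namespace for compute pollsters.
    NAMESPACE = 'ceilometer.poll.compute'

    # Discover and load every plugin registered under the namespace.
    manager = extension.ExtensionManager(namespace=NAMESPACE)

    # One shared executor for all pollsters, mirroring the single
    # ThreadPoolExecutor object repeated in the log lines above.
    executor = ThreadPoolExecutor(max_workers=4)

    for ext in manager:
        # ext.name is the meter name (e.g. 'memory.usage'); ext.plugin is
        # the pollster class. The real manager also wires up the cache,
        # pollster history, and discovery cache seen in the log messages.
        print('registering pollster', ext.name, ext.plugin)
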
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.351 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'name': 'vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.355 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 613e6b77-82b6-426c-90b1-38d6776feb1f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.357 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/613e6b77-82b6-426c-90b1-38d6776feb1f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.976 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 25 Nov 2025 10:41:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d315982d-83f3-4779-bcb8-ffc74d93a0d8 x-openstack-request-id: req-d315982d-83f3-4779-bcb8-ffc74d93a0d8 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.976 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "613e6b77-82b6-426c-90b1-38d6776feb1f", "name": "vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj", "status": "ACTIVE", "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "user_id": "af7a147d86064a21a94066f72173bba2", "metadata": {"metering.server_group": "d1a74954-729e-4b7f-a26d-ccdc925aa15b"}, "hostId": "5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd", "image": {"id": "d3f57a9d-2502-43be-9afd-d2b6e1c15c08", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/d3f57a9d-2502-43be-9afd-d2b6e1c15c08"}]}, "flavor": {"id": "8b869036-db8e-4fd3-b57a-e59e272f3c73", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8b869036-db8e-4fd3-b57a-e59e272f3c73"}]}, "created": "2025-11-25T10:40:24Z", "updated": "2025-11-25T10:40:31Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.183", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:5f:ba"}, {"version": 4, "addr": "192.168.122.189", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fa:5f:ba"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/613e6b77-82b6-426c-90b1-38d6776feb1f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/613e6b77-82b6-426c-90b1-38d6776feb1f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-25T10:40:31.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.976 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/613e6b77-82b6-426c-90b1-38d6776feb1f used request id req-d315982d-83f3-4779-bcb8-ffc74d93a0d8 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
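
The REQ/RESP pair above is the discovery step fetching metadata for instance 613e6b77-82b6-426c-90b1-38d6776feb1f from the Nova API at microversion 2.1. A rough equivalent using keystoneauth1 and python-novaclient is sketched below; the Keystone URL and credentials are placeholders (the agent in this log presents a pre-issued token, not a password).

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    # Placeholder credentials, not taken from this log.
    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',
        username='ceilometer', password='secret', project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)

    # '2.1' matches the X-OpenStack-Nova-API-Version header logged above.
    nova = client.Client('2.1', session=sess)

    server = nova.servers.get('613e6b77-82b6-426c-90b1-38d6776feb1f')
    # Extension attributes keep their raw JSON names.
    print(server.name, server.status, getattr(server, 'OS-EXT-STS:vm_state'))
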
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.978 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '613e6b77-82b6-426c-90b1-38d6776feb1f', 'name': 'vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.981 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.981 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.981 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:41:03.981501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.985 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes volume: 4740 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.988 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 613e6b77-82b6-426c-90b1-38d6776feb1f / tap4aa1b3c5-4e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.988 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.991 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.991 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.992 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.992 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.992 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.992 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
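
The "No delta meter predecessor" line at 10:41:03.988 and the all-zero network.outgoing.bytes.delta samples above are two sides of the same mechanism: a delta meter caches the previous cumulative counter per instance and interface and emits the difference, so the first observation of a device such as tap4aa1b3c5-4e has nothing to subtract and yields 0. A self-contained sketch of that bookkeeping (the cache layout is an assumption):

    # Last cumulative reading, keyed by (instance_uuid, interface).
    _prev = {}

    def delta_sample(instance_uuid, interface, cumulative_bytes):
        """Return the byte delta since the previous poll, 0 on first sight."""
        key = (instance_uuid, interface)
        prev = _prev.get(key)
        _prev[key] = cumulative_bytes
        if prev is None:
            # No predecessor yet, matching the log line above.
            return 0
        # A counter reset (e.g. instance reboot) would need extra handling.
        return max(cumulative_bytes - prev, 0)

    print(delta_sample('613e6b77', 'tap4aa1b3c5-4e', 4740))  # 0: first poll
    print(delta_sample('613e6b77', 'tap4aa1b3c5-4e', 5000))  # 260
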
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.993 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.993 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.992 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:41:03.992038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:03.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:41:03.993471) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.014 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/memory.usage volume: 49.1484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.038 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/memory.usage volume: 33.296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.060 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
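
The memory.usage volumes (for example 49.1484375 MB against the 512 MB m1.small flavor) come from libvirt's per-domain memory statistics. A sketch of reading the same counters directly is below; deriving usage as available minus usable, with an rss fallback, is an assumption about the formula, since the log only shows the final values.

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('44e7d3d0-d059-412e-a1a9-467d774d2bee')

    # memoryStats() returns KiB counters from the balloon driver.
    stats = dom.memoryStats()
    if 'available' in stats and 'usable' in stats:
        usage_mb = (stats['available'] - stats['usable']) / 1024.0
    else:
        # Fall back to resident set size when guest stats are unavailable.
        usage_mb = stats.get('rss', 0) / 1024.0
    print('memory.usage volume: %s MB' % usage_mb)
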
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.061 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.061 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.061 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj>]
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.062 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-25T10:41:04.061494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
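
The ERROR above is deliberate blacklisting rather than a failure: LibvirtInspector only exposes cumulative counters, so a rate meter like network.outgoing.bytes.rate can never be served, and the pollster raises PollsterPermanentError so the manager stops polling those resources on this source. A simplified sketch of the pattern (the sample() helper is hypothetical; the real plugin base is more involved):

    from ceilometer.polling import plugin_base

    def get_samples(pollster, inspector, resources):
        """Yield samples, permanently excluding resources the inspector
        can never serve, as in the log line above."""
        failed = []
        for resource in resources:
            try:
                # Hypothetical helper standing in for the pollster's
                # per-resource sampling logic.
                yield from pollster.sample(inspector, resource)
            except NotImplementedError:
                # The inspector provides no rate data at all, so retrying
                # this resource would never succeed.
                failed.append(resource)
        if failed:
            raise plugin_base.PollsterPermanentError(failed)
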
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.063 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes volume: 4975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.063 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.064 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:41:04.063257) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.065 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.065 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:41:04.065619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.066 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.066 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.067 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.068 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.068 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:41:04.067505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.069 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.069 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.069 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/cpu volume: 322720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.070 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/cpu volume: 31590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.070 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 42750000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:41:04.069681) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
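
The cpu volumes are cumulative guest CPU time in nanoseconds; 322720000000 ns is about 322.7 s of CPU time for instance-00000002. libvirt reports this counter in the domain info tuple, so the same number can be read directly (connection URI assumed):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('44e7d3d0-d059-412e-a1a9-467d774d2bee')

    # info() -> [state, maxMem KiB, memory KiB, nrVirtCpu, cpuTime ns]
    state, max_mem, mem, vcpus, cpu_time_ns = dom.info()
    print('cpu volume: %d ns (%.1f s on %d vCPU)'
          % (cpu_time_ns, cpu_time_ns / 1e9, vcpus))
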
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.072 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.072 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.072 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:41:04.071942) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.073 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:41:04.074189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.099 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.100 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.100 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.124 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.125 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.125 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.153 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.154 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.154 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
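
Each instance reports three disk.device.capacity samples: two 1 GiB devices matching the flavor's 1 GB root and 1 GB ephemeral disks, plus a ~570 KiB device consistent with the config drive ("config_drive": "True" in the Nova response earlier). libvirt's blockInfo() returns these figures per device; in the sketch below the device names are assumptions, since real code would parse them from the domain XML.

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('44e7d3d0-d059-412e-a1a9-467d774d2bee')

    # Device names assumed; parse dom.XMLDesc() for the real list.
    for dev in ('vda', 'vdb', 'vdc'):
        capacity, allocation, physical = dom.blockInfo(dev)
        print('%s capacity=%d allocation=%d physical=%d'
              % (dev, capacity, allocation, physical))
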
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.155 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.156 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:41:04.155930) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.222 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.223 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.224 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.299 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.299 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.300 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.376 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.377 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.377 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.377 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.378 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.378 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.378 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 1593102466 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.379 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 365927498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.379 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 408314029 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.379 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 474937372 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.379 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.379 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 4243844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.380 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.380 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.380 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:41:04.378488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.381 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.381 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.381 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.382 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.382 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.382 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.383 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.383 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.383 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.383 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.383 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.384 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.385 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.385 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:41:04.381272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.385 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.385 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:41:04.385013) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.386 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.386 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.386 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.386 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.387 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.387 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.387 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.388 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.388 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:41:04.388137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.388 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.388 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.389 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.389 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.389 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.389 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.390 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.390 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.390 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.391 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.391 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 31878521808 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.391 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 231382257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.391 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:41:04.391177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.391 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.392 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.392 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.392 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.392 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.392 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.393 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.393 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.394 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.394 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.394 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:41:04.394172) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.395 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.395 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.395 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.395 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.396 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:41:04.395667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.396 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.396 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.396 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.397 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.397 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.397 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.397 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.398 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.398 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.398 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.398 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.399 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.399 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:41:04.398701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.400 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.400 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.400 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.400 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.400 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.401 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:41:04.400336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.401 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.401 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.402 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.402 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.402 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.403 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.403 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-25T10:41:04.403471) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.403 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj>]
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.404 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.406 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:41:04.404763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.406 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets volume: 34 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:41:04.406146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.406 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.407 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:41:04.408019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.409 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.409 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.409 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:41:04.409302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.409 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.410 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.410 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.411 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:41:04.410969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.411 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.411 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.412 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.413 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:41:04.414 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:41:05 compute-0 nova_compute[189381]: 2025-11-25 10:41:05.276 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:06 compute-0 nova_compute[189381]: 2025-11-25 10:41:06.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:06 compute-0 ovn_controller[97779]: 2025-11-25T10:41:06Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fa:5f:ba 192.168.0.183
Nov 25 10:41:06 compute-0 ovn_controller[97779]: 2025-11-25T10:41:06Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fa:5f:ba 192.168.0.183
Nov 25 10:41:06 compute-0 podman[243026]: 2025-11-25 10:41:06.967500132 +0000 UTC m=+0.078880201 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.045 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.046 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.180 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.247 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.248 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.318 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.320 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.386 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.387 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.465 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.474 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.555 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.557 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.623 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.625 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.663 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.719 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.720 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.788 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.801 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.886 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.888 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.954 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:07 compute-0 nova_compute[189381]: 2025-11-25 10:41:07.956 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.022 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.025 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.095 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.456 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.458 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4888MB free_disk=72.16501998901367GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.459 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.459 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.534 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.535 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.536 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 613e6b77-82b6-426c-90b1-38d6776feb1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.536 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.537 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.616 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.629 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.657 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:41:08 compute-0 nova_compute[189381]: 2025-11-25 10:41:08.658 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:41:09 compute-0 nova_compute[189381]: 2025-11-25 10:41:09.658 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:10 compute-0 nova_compute[189381]: 2025-11-25 10:41:10.281 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:10 compute-0 podman[243083]: 2025-11-25 10:41:10.986227216 +0000 UTC m=+0.090705636 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:41:10 compute-0 podman[243082]: 2025-11-25 10:41:10.987143993 +0000 UTC m=+0.089902783 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6)
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.666 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.811 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.812 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.813 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:41:12 compute-0 nova_compute[189381]: 2025-11-25 10:41:12.815 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:41:14 compute-0 podman[243126]: 2025-11-25 10:41:14.008988688 +0000 UTC m=+0.119404813 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 10:41:15 compute-0 nova_compute[189381]: 2025-11-25 10:41:15.014 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:41:15 compute-0 nova_compute[189381]: 2025-11-25 10:41:15.030 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:41:15 compute-0 nova_compute[189381]: 2025-11-25 10:41:15.030 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:41:15 compute-0 nova_compute[189381]: 2025-11-25 10:41:15.031 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:15 compute-0 nova_compute[189381]: 2025-11-25 10:41:15.032 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:15 compute-0 nova_compute[189381]: 2025-11-25 10:41:15.032 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:15 compute-0 nova_compute[189381]: 2025-11-25 10:41:15.033 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:41:15 compute-0 nova_compute[189381]: 2025-11-25 10:41:15.285 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:15 compute-0 podman[243152]: 2025-11-25 10:41:15.978242901 +0000 UTC m=+0.086971517 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:41:17 compute-0 nova_compute[189381]: 2025-11-25 10:41:17.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:41:17 compute-0 nova_compute[189381]: 2025-11-25 10:41:17.669 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:20 compute-0 nova_compute[189381]: 2025-11-25 10:41:20.290 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:21 compute-0 podman[243173]: 2025-11-25 10:41:21.991127863 +0000 UTC m=+0.098520224 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:41:22 compute-0 nova_compute[189381]: 2025-11-25 10:41:22.675 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:25 compute-0 nova_compute[189381]: 2025-11-25 10:41:25.298 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:27 compute-0 nova_compute[189381]: 2025-11-25 10:41:27.677 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:28 compute-0 podman[243199]: 2025-11-25 10:41:28.983011552 +0000 UTC m=+0.083349621 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 10:41:28 compute-0 podman[243198]: 2025-11-25 10:41:28.984232698 +0000 UTC m=+0.092078416 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 10:41:29 compute-0 podman[203557]: time="2025-11-25T10:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:41:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:41:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
Nov 25 10:41:30 compute-0 nova_compute[189381]: 2025-11-25 10:41:30.303 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:30 compute-0 podman[243234]: 2025-11-25 10:41:30.96275859 +0000 UTC m=+0.078500260 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release-0.7.12=)
Nov 25 10:41:31 compute-0 openstack_network_exporter[205722]: ERROR   10:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:41:31 compute-0 openstack_network_exporter[205722]: ERROR   10:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:41:31 compute-0 openstack_network_exporter[205722]: ERROR   10:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:41:31 compute-0 openstack_network_exporter[205722]: ERROR   10:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:41:31 compute-0 openstack_network_exporter[205722]: ERROR   10:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:41:32 compute-0 nova_compute[189381]: 2025-11-25 10:41:32.680 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:35 compute-0 nova_compute[189381]: 2025-11-25 10:41:35.308 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:41:36.042 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:41:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:41:36.043 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:41:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:41:36.043 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:41:37 compute-0 nova_compute[189381]: 2025-11-25 10:41:37.684 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:37 compute-0 podman[243253]: 2025-11-25 10:41:37.961795566 +0000 UTC m=+0.069659832 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 10:41:40 compute-0 nova_compute[189381]: 2025-11-25 10:41:40.313 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:42 compute-0 podman[243271]: 2025-11-25 10:41:42.008809385 +0000 UTC m=+0.114413517 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 10:41:42 compute-0 podman[243272]: 2025-11-25 10:41:42.015972304 +0000 UTC m=+0.113647195 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:41:42 compute-0 nova_compute[189381]: 2025-11-25 10:41:42.687 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:44 compute-0 podman[243312]: 2025-11-25 10:41:44.811664954 +0000 UTC m=+0.130512386 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 10:41:45 compute-0 nova_compute[189381]: 2025-11-25 10:41:45.315 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:46 compute-0 podman[243338]: 2025-11-25 10:41:46.962104401 +0000 UTC m=+0.078152240 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 10:41:47 compute-0 nova_compute[189381]: 2025-11-25 10:41:47.686 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:50 compute-0 nova_compute[189381]: 2025-11-25 10:41:50.316 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:52 compute-0 nova_compute[189381]: 2025-11-25 10:41:52.688 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:52 compute-0 podman[243361]: 2025-11-25 10:41:52.956266557 +0000 UTC m=+0.069923480 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:41:55 compute-0 nova_compute[189381]: 2025-11-25 10:41:55.320 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:57 compute-0 nova_compute[189381]: 2025-11-25 10:41:57.691 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:41:59 compute-0 podman[203557]: time="2025-11-25T10:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:41:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:41:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
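The two `GET` lines above are the podman system service answering `podman_exporter` over its API socket (`/run/podman/podman.sock`, matching the exporter's `CONTAINER_HOST` setting logged earlier). A stdlib-only sketch of the same request; the `UnixHTTPConnection` helper is a hypothetical adapter, and only the socket path and endpoint come from the log:

```python
# Hedged sketch: query the libpod REST API the way podman_exporter does.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials an AF_UNIX socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")  # host header only; connect() is overridden
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
resp = conn.getresponse()
print(resp.status, resp.read(200))  # 200 and a JSON array, as in the access log
```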
Nov 25 10:42:00 compute-0 podman[243386]: 2025-11-25 10:42:00.001241154 +0000 UTC m=+0.094565128 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm)
Nov 25 10:42:00 compute-0 podman[243387]: 2025-11-25 10:42:00.00041745 +0000 UTC m=+0.091566921 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:42:00 compute-0 nova_compute[189381]: 2025-11-25 10:42:00.328 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:01 compute-0 openstack_network_exporter[205722]: ERROR   10:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:42:01 compute-0 openstack_network_exporter[205722]: ERROR   10:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:42:01 compute-0 openstack_network_exporter[205722]: ERROR   10:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:42:01 compute-0 openstack_network_exporter[205722]: ERROR   10:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:42:01 compute-0 openstack_network_exporter[205722]: ERROR   10:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
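The errors above are the exporter's appctl probes coming up empty: `ovn-northd` does not run on a compute node, the ovsdb-server control socket is not where the exporter looks, and the `dpif-netdev/*` commands only answer for a userspace (netdev/DPDK) datapath, which this kernel-datapath host does not have. A hedged reproduction of those probes (assumes `ovs-appctl`/`ovn-appctl` are on PATH; the `status` subcommand is illustrative, the `dpif-netdev` commands are the ones named in the errors):

```python
# Hedged sketch: reproduce the exporter's failing probes by hand.
import subprocess

for cmd in (
    ["ovn-appctl", "-t", "ovn-northd", "status"],  # no northd on a compute node
    ["ovs-appctl", "dpif-netdev/pmd-perf-show"],   # needs a netdev datapath
    ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
):
    r = subprocess.run(cmd, capture_output=True, text=True)
    print(cmd[0], r.returncode, (r.stderr or r.stdout).strip()[:80])
```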
Nov 25 10:42:01 compute-0 podman[243425]: 2025-11-25 10:42:01.975214925 +0000 UTC m=+0.085991649 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=kepler, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_id=edpm, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 10:42:02 compute-0 nova_compute[189381]: 2025-11-25 10:42:02.693 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:05 compute-0 nova_compute[189381]: 2025-11-25 10:42:05.338 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:07 compute-0 nova_compute[189381]: 2025-11-25 10:42:07.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:07 compute-0 nova_compute[189381]: 2025-11-25 10:42:07.696 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.051 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.052 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.052 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
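The Acquiring/acquired/released triple above is the standard DEBUG trace from oslo.concurrency's `synchronized` locking, which nova uses to serialize resource-tracker work on the `compute_resources` lock. A minimal sketch of the pattern; only the lock name is from the log, the function itself is illustrative:

```python
# Hedged sketch of the locking pattern behind the three log lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_compute_node_cache_example():
    # Body runs with the lock held; lockutils emits the
    # Acquiring/acquired/released lines at DEBUG, as seen above.
    pass

clean_compute_node_cache_example()
```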
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.053 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.304 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.364 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.366 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.433 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.434 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.502 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.504 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.568 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.576 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.637 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.638 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.700 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.701 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.766 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.768 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.863 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.880 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.950 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:08 compute-0 nova_compute[189381]: 2025-11-25 10:42:08.952 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:08 compute-0 podman[243468]: 2025-11-25 10:42:08.977045387 +0000 UTC m=+0.093684821 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.020 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.021 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.085 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.086 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.187 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
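Each disk probe in this audit is `qemu-img info` wrapped in `oslo_concurrency.prlimit`, capping the child at 1 GiB of address space (`--as=1073741824`) and 30 CPU seconds (`--cpu=30`) so a pathological image cannot stall or balloon the periodic task. The same guarded invocation, reproduced as a sketch (the instance path is copied from the log and must exist on the host):

```python
# Hedged sketch: the exact guarded probe nova logs above, via subprocess.
import json
import subprocess

cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # cap address space at 1 GiB
    "--cpu=30",          # cap CPU time at 30 seconds
    "--", "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info",
    "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk",
    "--force-share", "--output=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(json.loads(out.stdout)["virtual-size"])
```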
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.546 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.548 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4871MB free_disk=72.16314697265625GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.548 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.549 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.626 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.626 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.627 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 613e6b77-82b6-426c-90b1-38d6776feb1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.627 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.628 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.718 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.737 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
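Placement turns the inventory above into schedulable capacity as `(total - reserved) * allocation_ratio` per resource class (standard Placement behaviour, not stated in the log itself), so this node advertises 32 VCPU, 7167 MB of RAM and 70.2 GB of disk against the 3 vCPU / 2048 MB / 6 GB already in use. A worked check with the numbers copied from the log line:

```python
# Hedged sketch: how Placement derives capacity from the inventory above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```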
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.740 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:42:09 compute-0 nova_compute[189381]: 2025-11-25 10:42:09.741 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:10 compute-0 nova_compute[189381]: 2025-11-25 10:42:10.344 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:12 compute-0 nova_compute[189381]: 2025-11-25 10:42:12.699 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:12 compute-0 podman[243499]: 2025-11-25 10:42:12.971184568 +0000 UTC m=+0.080359208 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, name=ubi9-minimal, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64)
Nov 25 10:42:12 compute-0 podman[243500]: 2025-11-25 10:42:12.991198543 +0000 UTC m=+0.084323202 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:42:14 compute-0 nova_compute[189381]: 2025-11-25 10:42:14.736 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:14 compute-0 nova_compute[189381]: 2025-11-25 10:42:14.737 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:14 compute-0 nova_compute[189381]: 2025-11-25 10:42:14.737 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:42:14 compute-0 podman[243544]: 2025-11-25 10:42:14.996797193 +0000 UTC m=+0.106964162 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 10:42:15 compute-0 nova_compute[189381]: 2025-11-25 10:42:15.011 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:42:15 compute-0 nova_compute[189381]: 2025-11-25 10:42:15.011 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:42:15 compute-0 nova_compute[189381]: 2025-11-25 10:42:15.011 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:42:15 compute-0 nova_compute[189381]: 2025-11-25 10:42:15.348 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:16 compute-0 nova_compute[189381]: 2025-11-25 10:42:16.975 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updating instance_info_cache with network_info: [{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:42:16 compute-0 nova_compute[189381]: 2025-11-25 10:42:16.987 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:42:16 compute-0 nova_compute[189381]: 2025-11-25 10:42:16.987 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:42:16 compute-0 nova_compute[189381]: 2025-11-25 10:42:16.988 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:16 compute-0 nova_compute[189381]: 2025-11-25 10:42:16.988 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:16 compute-0 nova_compute[189381]: 2025-11-25 10:42:16.988 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:16 compute-0 nova_compute[189381]: 2025-11-25 10:42:16.989 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:42:17 compute-0 nova_compute[189381]: 2025-11-25 10:42:17.701 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:17 compute-0 podman[243570]: 2025-11-25 10:42:17.969290724 +0000 UTC m=+0.076497238 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:42:18 compute-0 nova_compute[189381]: 2025-11-25 10:42:18.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:20 compute-0 nova_compute[189381]: 2025-11-25 10:42:20.353 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:20.642 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:42:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:20.643 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
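The matched event above is ovsdbapp's row-event machinery: the metadata agent registers a `RowEvent` against the `SB_Global` table, and the nb_cfg bump (5 to 6 here) wakes it so the agent can refresh its Chassis row after the randomized delay it just logged. A skeletal watcher in that style; the constructor arguments mirror the matched event, while the class body and `run()` are illustrative:

```python
# Hedged sketch of an ovsdbapp RowEvent like the one matched above.
# events=('update',), table='SB_Global', conditions=None are from the log.
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), "SB_Global", None)

    def run(self, event, row, old):
        # In the log line above, old.nb_cfg was 5 and row.nb_cfg is 6.
        print(f"nb_cfg bumped to {row.nb_cfg}; refresh chassis soon")
```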
Nov 25 10:42:20 compute-0 nova_compute[189381]: 2025-11-25 10:42:20.646 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:22 compute-0 nova_compute[189381]: 2025-11-25 10:42:22.703 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:23 compute-0 nova_compute[189381]: 2025-11-25 10:42:23.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:42:23 compute-0 podman[243592]: 2025-11-25 10:42:23.956435545 +0000 UTC m=+0.069229029 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:42:24 compute-0 nova_compute[189381]: 2025-11-25 10:42:24.986 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:24 compute-0 nova_compute[189381]: 2025-11-25 10:42:24.987 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.004 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.096 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.098 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.109 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.110 189385 INFO nova.compute.claims [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.292 189385 DEBUG nova.compute.provider_tree [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.306 189385 DEBUG nova.scheduler.client.report [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
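
Worked arithmetic for the inventory reported above: placement computes schedulable capacity per resource class as (total - reserved) * allocation_ratio, so this node exposes far more vCPUs to the scheduler than it physically has.

    # Capacity implied by the inventory dict logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
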
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.329 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
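
The Acquiring/acquired/released DEBUG triplets that recur throughout this build come from oslo.concurrency's lockutils (the "inner" in the logged file path is the wrapper created by its synchronized decorator). A minimal sketch of the same pattern, using the public context-manager form:

    # Minimal sketch of the locking pattern behind the DEBUG lines above.
    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        # the resource claim happens here; the 'released ... held N s'
        # line is emitted when this block exits
        pass
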
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.330 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.356 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.370 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.371 189385 DEBUG nova.network.neutron [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.386 189385 INFO nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.423 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.503 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.505 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.506 189385 INFO nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Creating image(s)
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.506 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.507 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.507 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.521 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.581 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.582 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "efa46ac01001129056abbd05fc9719c35c46db87" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.584 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.602 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.668 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.670 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87,backing_fmt=raw /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.843 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87,backing_fmt=raw /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk 1073741824" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
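
The sequence above is the qcow2 image backend at work: qemu-img info on the cached base image (wrapped in prlimit to cap address space and CPU time), then qemu-img create of a copy-on-write overlay whose backing file is that base. A sketch of the same two calls through oslo.concurrency, with the paths copied from the log:

    # Sketch of the two image-backend calls above via oslo.concurrency.
    from oslo_concurrency import processutils

    base = "/var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87"
    disk = "/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk"

    # --as=1073741824 --cpu=30 in the logged command map to these limits.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute("qemu-img", "info", base,
                                     "--force-share", "--output=json",
                                     prlimit=limits)

    processutils.execute("qemu-img", "create", "-f", "qcow2",
                         "-o", "backing_file=%s,backing_fmt=raw" % base,
                         disk, "1073741824")
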
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.845 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "efa46ac01001129056abbd05fc9719c35c46db87" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.846 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.914 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.917 189385 DEBUG nova.virt.disk.api [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Checking if we can resize image /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.918 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.986 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.987 189385 DEBUG nova.virt.disk.api [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Cannot resize image /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
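
"Cannot resize image ... to a smaller size" is the guard in nova.virt.disk.api.can_resize_image: the requested size (here 1073741824 bytes, the flavor's 1 GiB root disk) is compared against the image's current virtual size, and anything that is not strictly larger is skipped. Roughly:

    # Rough sketch of the can-resize check logged above: a disk may only grow.
    import json
    import subprocess

    def can_resize(path, new_size_bytes):
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"]))
        return new_size_bytes > info["virtual-size"]
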
Nov 25 10:42:25 compute-0 nova_compute[189381]: 2025-11-25 10:42:25.988 189385 DEBUG nova.objects.instance [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'migration_context' on Instance uuid 83ab44b9-7ddb-4994-9415-20b7dd9c081c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.007 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.008 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.009 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.029 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.106 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.108 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.109 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.128 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.199 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.200 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.430 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 1073741824" returned: 0 in 0.230s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.432 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.323s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.434 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.497 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.499 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.499 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Ensure instance console log exists: /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.500 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.500 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:26 compute-0 nova_compute[189381]: 2025-11-25 10:42:26.501 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:27 compute-0 nova_compute[189381]: 2025-11-25 10:42:27.709 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:28.645 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:42:28 compute-0 nova_compute[189381]: 2025-11-25 10:42:28.801 189385 DEBUG nova.network.neutron [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Successfully updated port: 51ae07e4-a2d5-4ea0-8a58-37fa22980090 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 10:42:28 compute-0 nova_compute[189381]: 2025-11-25 10:42:28.820 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:42:28 compute-0 nova_compute[189381]: 2025-11-25 10:42:28.820 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquired lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:42:28 compute-0 nova_compute[189381]: 2025-11-25 10:42:28.821 189385 DEBUG nova.network.neutron [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 10:42:28 compute-0 nova_compute[189381]: 2025-11-25 10:42:28.898 189385 DEBUG nova.compute.manager [req-f5b135a8-d19f-4dd1-acbe-311a0d05d9a8 req-ece4a9f9-ddeb-4ac1-a39e-44bf7430d234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received event network-changed-51ae07e4-a2d5-4ea0-8a58-37fa22980090 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:42:28 compute-0 nova_compute[189381]: 2025-11-25 10:42:28.899 189385 DEBUG nova.compute.manager [req-f5b135a8-d19f-4dd1-acbe-311a0d05d9a8 req-ece4a9f9-ddeb-4ac1-a39e-44bf7430d234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Refreshing instance network info cache due to event network-changed-51ae07e4-a2d5-4ea0-8a58-37fa22980090. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:42:28 compute-0 nova_compute[189381]: 2025-11-25 10:42:28.899 189385 DEBUG oslo_concurrency.lockutils [req-f5b135a8-d19f-4dd1-acbe-311a0d05d9a8 req-ece4a9f9-ddeb-4ac1-a39e-44bf7430d234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:42:28 compute-0 nova_compute[189381]: 2025-11-25 10:42:28.968 189385 DEBUG nova.network.neutron [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 10:42:29 compute-0 podman[203557]: time="2025-11-25T10:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:42:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:42:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
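
The two GETs above are the prometheus-podman-exporter scraping podman's libpod REST API over the unix socket it was configured with (CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of the first request, with the endpoint path taken from the log:

    # Stdlib sketch: the libpod API call logged above, over the unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
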
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.170 189385 DEBUG nova.network.neutron [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updating instance_info_cache with network_info: [{"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.213 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Releasing lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.214 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Instance network_info: |[{"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.214 189385 DEBUG oslo_concurrency.lockutils [req-f5b135a8-d19f-4dd1-acbe-311a0d05d9a8 req-ece4a9f9-ddeb-4ac1-a39e-44bf7430d234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.215 189385 DEBUG nova.network.neutron [req-f5b135a8-d19f-4dd1-acbe-311a0d05d9a8 req-ece4a9f9-ddeb-4ac1-a39e-44bf7430d234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Refreshing network info cache for port 51ae07e4-a2d5-4ea0-8a58-37fa22980090 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.218 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Start _get_guest_xml network_info=[{"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:31:35Z,direct_url=<?>,disk_format='qcow2',id=d3f57a9d-2502-43be-9afd-d2b6e1c15c08,min_disk=0,min_ram=0,name='cirros',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:31:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 1, 'device_type': 'disk', 'encrypted': False, 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.227 189385 WARNING nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.245 189385 DEBUG nova.virt.libvirt.host [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.246 189385 DEBUG nova.virt.libvirt.host [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.256 189385 DEBUG nova.virt.libvirt.host [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.256 189385 DEBUG nova.virt.libvirt.host [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.257 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.257 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:31:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='8b869036-db8e-4fd3-b57a-e59e272f3c73',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:31:35Z,direct_url=<?>,disk_format='qcow2',id=d3f57a9d-2502-43be-9afd-d2b6e1c15c08,min_disk=0,min_ram=0,name='cirros',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:31:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.257 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.258 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.258 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.258 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.258 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.259 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.259 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.259 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.260 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.260 189385 DEBUG nova.virt.hardware [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
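
The topology walk above enumerates the ways to factor the flavor's vcpus into sockets x cores x threads within the (default 65536) limits; with 1 vCPU and no flavor or image preferences, 1:1:1 is the only candidate. A toy version of the enumeration (not nova's exact code):

    # Toy enumeration of guest CPU topologies (not nova's exact
    # implementation): factor vcpus into sockets * cores * threads.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
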
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.264 189385 DEBUG nova.virt.libvirt.vif [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum',id=4,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-ljsskeb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:42:25Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjk0MjIzNjQ4OTcxODQ5Mjk5NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjk0MjIzNjQ4OTcxODQ5Mjk5NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=83ab44b9-7ddb-4994-9415-20b7dd9c081c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.265 189385 DEBUG nova.network.os_vif_util [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.265 189385 DEBUG nova.network.os_vif_util [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c3:2b,bridge_name='br-int',has_traffic_filtering=True,id=51ae07e4-a2d5-4ea0-8a58-37fa22980090,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap51ae07e4-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.266 189385 DEBUG nova.objects.instance [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'pci_devices' on Instance uuid 83ab44b9-7ddb-4994-9415-20b7dd9c081c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.281 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <uuid>83ab44b9-7ddb-4994-9415-20b7dd9c081c</uuid>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <name>instance-00000004</name>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <memory>524288</memory>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <metadata>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <nova:name>vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum</nova:name>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 10:42:30</nova:creationTime>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <nova:flavor name="m1.small">
Nov 25 10:42:30 compute-0 nova_compute[189381]:         <nova:memory>512</nova:memory>
Nov 25 10:42:30 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 10:42:30 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 10:42:30 compute-0 nova_compute[189381]:         <nova:ephemeral>1</nova:ephemeral>
Nov 25 10:42:30 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 10:42:30 compute-0 nova_compute[189381]:         <nova:user uuid="af7a147d86064a21a94066f72173bba2">admin</nova:user>
Nov 25 10:42:30 compute-0 nova_compute[189381]:         <nova:project uuid="aef0c6ba1dd54218a527ced3f8d2a1be">admin</nova:project>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="d3f57a9d-2502-43be-9afd-d2b6e1c15c08"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 10:42:30 compute-0 nova_compute[189381]:         <nova:port uuid="51ae07e4-a2d5-4ea0-8a58-37fa22980090">
Nov 25 10:42:30 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="192.168.0.243" ipVersion="4"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   </metadata>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <system>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <entry name="serial">83ab44b9-7ddb-4994-9415-20b7dd9c081c</entry>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <entry name="uuid">83ab44b9-7ddb-4994-9415-20b7dd9c081c</entry>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </system>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <os>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   </os>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <features>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <apic/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   </features>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   </clock>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <target dev="vdb" bus="virtio"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.config"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:0e:c3:2b"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <target dev="tap51ae07e4-a2"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </interface>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/console.log" append="off"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </serial>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <video>
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </video>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 10:42:30 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 10:42:30 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 10:42:30 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:42:30 compute-0 nova_compute[189381]: </domain>
Nov 25 10:42:30 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
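The XML dump above is the complete transient domain definition that nova hands to libvirt for instance-00000004. A minimal sketch of the same hand-off, assuming libvirt-python and a local qemu:///system socket; nova itself goes through its nova.virt.libvirt host/guest wrappers rather than calling the bindings directly like this:

```
# Minimal sketch: boot a transient guest from XML like the document logged
# above. Assumes libvirt-python and qemu:///system; "domain.xml" is an
# illustrative file holding the <domain type="kvm"> text from the log.
import libvirt

with open("domain.xml") as f:
    xml = f.read()

conn = libvirt.open("qemu:///system")
try:
    # createXML starts a transient (non-persistent) domain. Starting it
    # paused is consistent with the "VM Paused" lifecycle event that
    # appears further down before the guest is resumed.
    dom = conn.createXML(xml, libvirt.VIR_DOMAIN_START_PAUSED)
    print(dom.name(), dom.UUIDString())
finally:
    conn.close()
```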
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.282 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Preparing to wait for external event network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.282 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.282 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.282 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
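Note the ordering here: the compute manager registers its waiter for network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 before plugging the interface, so an event that arrives quickly (as it does at 10:42:32.109 below) cannot be missed. A sketch of that prepare-then-wait pattern, with illustrative names rather than nova's actual helpers:

```
# Sketch of the prepare-then-wait pattern around network-vif-plugged.
# EVENTS, prepare_event, deliver_event and wait_for_vif_plug are
# illustrative names, not nova's API.
import threading

EVENTS = {}  # (instance_uuid, event_name) -> threading.Event

def prepare_event(instance_uuid, name):
    # Registered *before* the VIF is plugged, so a fast event is not lost.
    return EVENTS.setdefault((instance_uuid, name), threading.Event())

def deliver_event(instance_uuid, name):
    # Called from the API side when neutron posts the external event.
    ev = EVENTS.get((instance_uuid, name))
    if ev:
        ev.set()

def wait_for_vif_plug(instance_uuid, port_id, timeout=300):
    ev = prepare_event(instance_uuid, f"network-vif-plugged-{port_id}")
    # ... plug the VIF here ...
    if not ev.wait(timeout):
        raise TimeoutError("network-vif-plugged not received")
```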
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.283 189385 DEBUG nova.virt.libvirt.vif [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum',id=4,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-ljsskeb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T10:42:25Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjk0MjIzNjQ4OTcxODQ5Mjk5NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Nov 25 10:42:30 compute-0 nova_compute[189381]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjk0MjIzNjQ4OTcxODQ5Mjk5NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=83ab44b9-7ddb-4994-9415-20b7dd9c081c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.283 189385 DEBUG nova.network.os_vif_util [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.284 189385 DEBUG nova.network.os_vif_util [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c3:2b,bridge_name='br-int',has_traffic_filtering=True,id=51ae07e4-a2d5-4ea0-8a58-37fa22980090,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap51ae07e4-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.284 189385 DEBUG os_vif [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c3:2b,bridge_name='br-int',has_traffic_filtering=True,id=51ae07e4-a2d5-4ea0-8a58-37fa22980090,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap51ae07e4-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.284 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.285 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.285 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.288 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.288 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51ae07e4-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.288 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51ae07e4-a2, col_values=(('external_ids', {'iface-id': '51ae07e4-a2d5-4ea0-8a58-37fa22980090', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:c3:2b', 'vm-uuid': '83ab44b9-7ddb-4994-9415-20b7dd9c081c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.290 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:30 compute-0 NetworkManager[56317]: <info>  [1764067350.2915] manager: (tap51ae07e4-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.292 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.299 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.300 189385 INFO os_vif [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:c3:2b,bridge_name='br-int',has_traffic_filtering=True,id=51ae07e4-a2d5-4ea0-8a58-37fa22980090,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap51ae07e4-a2')
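The successful plug is the net effect of the two OVSDB commands logged just above: AddPortCommand on br-int followed by DbSetCommand writing the iface-id/attached-mac external_ids that ovn-controller matches against the Southbound port binding. A rough equivalent with ovsdbapp, assuming a local Open vSwitch database socket (paths and the timeout are illustrative):

```
# Sketch of the OVSDB transaction os-vif ran above (AddPortCommand plus
# DbSetCommand on the Interface record). Assumes ovsdbapp and root access
# to the local ovsdb-server socket.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap51ae07e4-a2", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap51ae07e4-a2",
        ("external_ids", {
            "iface-id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:0e:c3:2b",
        })))
```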
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.541 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.541 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.541 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.542 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No VIF found with MAC fa:16:3e:0e:c3:2b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 10:42:30 compute-0 nova_compute[189381]: 2025-11-25 10:42:30.543 189385 INFO nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Using config drive
Nov 25 10:42:30 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:42:30.264 189385 DEBUG nova.virt.libvirt.vif [None req-4433acb7-3b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 25 10:42:30 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:42:30.283 189385 DEBUG nova.virt.libvirt.vif [None req-4433acb7-3b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
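These two rsyslog warnings correspond to the oversized nova.virt.libvirt.vif entries at 10:42:30.264 and 10:42:30.283, whose instance dumps (including the full user_data blob) exceed the configured 8096-byte ceiling and are truncated on the syslog side; the journal retains the full text. If complete lines are wanted in syslog as well, the ceiling has to be raised globally, before any input module loads. An illustrative /etc/rsyslog.conf fragment:

```
# Illustrative only: raise rsyslog's message size ceiling so multi-KB
# nova_compute debug lines are not cut at 8096 bytes.
global(maxMessageSize="64k")
```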
Nov 25 10:42:30 compute-0 podman[243645]: 2025-11-25 10:42:30.979282351 +0000 UTC m=+0.085204347 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:42:30 compute-0 podman[243646]: 2025-11-25 10:42:30.98448226 +0000 UTC m=+0.090599802 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.236 189385 INFO nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Creating config drive at /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.config
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.243 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoot_43mx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.382 189385 DEBUG oslo_concurrency.processutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoot_43mx" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
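The config drive is a plain ISO 9660 image whose volume label, config-2, is what cloud-init probes for by label at boot. The command nova ran through oslo_concurrency's processutils.execute is reproducible directly; the staging directory below is illustrative (nova used a throwaway tmpdir):

```
# Sketch of the config-drive build nova ran above; only the staging
# directory name differs from the logged command.
import subprocess

subprocess.run(
    ["/usr/bin/mkisofs",
     "-o", "/var/lib/nova/instances/"
           "83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.config",
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r",
     "-V", "config-2",           # volume label cloud-init searches for
     "/tmp/metadata-staging"],   # illustrative; nova used a tmpdir here
    check=True)
```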
Nov 25 10:42:31 compute-0 openstack_network_exporter[205722]: ERROR   10:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:42:31 compute-0 openstack_network_exporter[205722]: ERROR   10:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:42:31 compute-0 openstack_network_exporter[205722]: ERROR   10:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:42:31 compute-0 openstack_network_exporter[205722]: ERROR   10:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:42:31 compute-0 openstack_network_exporter[205722]: ERROR   10:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:42:31 compute-0 NetworkManager[56317]: <info>  [1764067351.4591] manager: (tap51ae07e4-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 25 10:42:31 compute-0 kernel: tap51ae07e4-a2: entered promiscuous mode
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.462 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:31 compute-0 ovn_controller[97779]: 2025-11-25T10:42:31Z|00045|binding|INFO|Claiming lport 51ae07e4-a2d5-4ea0-8a58-37fa22980090 for this chassis.
Nov 25 10:42:31 compute-0 ovn_controller[97779]: 2025-11-25T10:42:31Z|00046|binding|INFO|51ae07e4-a2d5-4ea0-8a58-37fa22980090: Claiming fa:16:3e:0e:c3:2b 192.168.0.243
Nov 25 10:42:31 compute-0 ovn_controller[97779]: 2025-11-25T10:42:31Z|00047|binding|INFO|Setting lport 51ae07e4-a2d5-4ea0-8a58-37fa22980090 ovn-installed in OVS
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.482 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:31 compute-0 systemd-udevd[243701]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 10:42:31 compute-0 systemd-machined[155706]: New machine qemu-4-instance-00000004.
Nov 25 10:42:31 compute-0 NetworkManager[56317]: <info>  [1764067351.5132] device (tap51ae07e4-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 10:42:31 compute-0 NetworkManager[56317]: <info>  [1764067351.5139] device (tap51ae07e4-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 10:42:31 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.526 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:c3:2b 192.168.0.243'], port_security=['fa:16:3e:0e:c3:2b 192.168.0.243'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-6oeui4yfk7wn-wt3ljj7puxet-54ctihgnfppt-port-xs3cpczjijad', 'neutron:cidrs': '192.168.0.243/24', 'neutron:device_id': '83ab44b9-7ddb-4994-9415-20b7dd9c081c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35870011-2c24-4719-a9ee-4942cd8ed50e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-6oeui4yfk7wn-wt3ljj7puxet-54ctihgnfppt-port-xs3cpczjijad', 'neutron:project_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48d58879-e124-47b1-85de-2b7aab5c0e02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53f1de54-d9db-4691-881b-b04f921a948f, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=51ae07e4-a2d5-4ea0-8a58-37fa22980090) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.527 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 51ae07e4-a2d5-4ea0-8a58-37fa22980090 in datapath 35870011-2c24-4719-a9ee-4942cd8ed50e bound to our chassis
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.528 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35870011-2c24-4719-a9ee-4942cd8ed50e
Nov 25 10:42:31 compute-0 ovn_controller[97779]: 2025-11-25T10:42:31Z|00048|binding|INFO|Setting lport 51ae07e4-a2d5-4ea0-8a58-37fa22980090 up in Southbound
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.550 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e947b71d-483e-46d7-a9d3-3b6f3e1600b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.583 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[e586c42c-2c21-4e39-b69a-da7ff5bd3ff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.586 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[0aba8b8e-e846-4dc8-bfae-b2319c0be0af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.617 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[46fafdba-127b-4981-a4e2-7c47028b4967]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.634 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[ebaeaed9-56d1-4047-9174-edf99e82cbb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35870011-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:64:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369752, 'reachable_time': 35658, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243716, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.652 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[84040a32-fb4f-4dd7-bceb-a4f9748cfc26]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369763, 'tstamp': 369763}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243717, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369766, 'tstamp': 369766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243717, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
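The two privsep replies above are the metadata agent reading back, over netlink, the tap35870011-21 device inside the ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e namespace: the interface is up and carries both the subnet address 192.168.0.2/24 and the metadata address 169.254.169.254/32. A sketch of the same check with pyroute2 (assumes root; names copied from the log):

```
# Sketch: list the addresses the agent just provisioned on the tap device
# inside the ovnmeta- namespace. Requires pyroute2 and root privileges.
from pyroute2 import NetNS

ns = NetNS("ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e")
try:
    for addr in ns.get_addr(label="tap35870011-21"):
        # Expect 192.168.0.2 (subnet) and 169.254.169.254 (metadata).
        print(addr.get_attr("IFA_ADDRESS"))
finally:
    ns.close()
```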
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.654 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35870011-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.656 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.657 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.659 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35870011-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.660 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.661 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35870011-20, col_values=(('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:42:31 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:31.662 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.744 189385 DEBUG nova.network.neutron [req-f5b135a8-d19f-4dd1-acbe-311a0d05d9a8 req-ece4a9f9-ddeb-4ac1-a39e-44bf7430d234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updated VIF entry in instance network info cache for port 51ae07e4-a2d5-4ea0-8a58-37fa22980090. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.744 189385 DEBUG nova.network.neutron [req-f5b135a8-d19f-4dd1-acbe-311a0d05d9a8 req-ece4a9f9-ddeb-4ac1-a39e-44bf7430d234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updating instance_info_cache with network_info: [{"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.771 189385 DEBUG oslo_concurrency.lockutils [req-f5b135a8-d19f-4dd1-acbe-311a0d05d9a8 req-ece4a9f9-ddeb-4ac1-a39e-44bf7430d234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.984 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764067351.9840875, 83ab44b9-7ddb-4994-9415-20b7dd9c081c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:42:31 compute-0 nova_compute[189381]: 2025-11-25 10:42:31.985 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] VM Started (Lifecycle Event)
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.019 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.030 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764067351.9841905, 83ab44b9-7ddb-4994-9415-20b7dd9c081c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.030 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] VM Paused (Lifecycle Event)
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.057 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.063 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.081 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.109 189385 DEBUG nova.compute.manager [req-7064a71c-2371-4090-98ac-9a6510c784d4 req-f34d15cb-5c7d-4c98-a498-d12aceb32e0b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received event network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.110 189385 DEBUG oslo_concurrency.lockutils [req-7064a71c-2371-4090-98ac-9a6510c784d4 req-f34d15cb-5c7d-4c98-a498-d12aceb32e0b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.110 189385 DEBUG oslo_concurrency.lockutils [req-7064a71c-2371-4090-98ac-9a6510c784d4 req-f34d15cb-5c7d-4c98-a498-d12aceb32e0b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.110 189385 DEBUG oslo_concurrency.lockutils [req-7064a71c-2371-4090-98ac-9a6510c784d4 req-f34d15cb-5c7d-4c98-a498-d12aceb32e0b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.110 189385 DEBUG nova.compute.manager [req-7064a71c-2371-4090-98ac-9a6510c784d4 req-f34d15cb-5c7d-4c98-a498-d12aceb32e0b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Processing event network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.111 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.116 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764067352.116077, 83ab44b9-7ddb-4994-9415-20b7dd9c081c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.116 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] VM Resumed (Lifecycle Event)
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.119 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.125 189385 INFO nova.virt.libvirt.driver [-] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Instance spawned successfully.
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.126 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.133 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.148 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.155 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.155 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.156 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.157 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.157 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.158 189385 DEBUG nova.virt.libvirt.driver [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.167 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.229 189385 INFO nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Took 6.72 seconds to spawn the instance on the hypervisor.
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.230 189385 DEBUG nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.378 189385 INFO nova.compute.manager [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Took 7.32 seconds to build instance.
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.429 189385 DEBUG oslo_concurrency.lockutils [None req-4433acb7-3b28-4cd9-a5bb-117fe74f6734 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:32 compute-0 nova_compute[189381]: 2025-11-25 10:42:32.711 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:33 compute-0 podman[243725]: 2025-11-25 10:42:32.999789629 +0000 UTC m=+0.109464514 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:42:34 compute-0 nova_compute[189381]: 2025-11-25 10:42:34.198 189385 DEBUG nova.compute.manager [req-1fe38f46-1c1f-4d85-a779-92d1284e0402 req-9677e2a5-399f-4c18-849a-8382aaae87f7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received event network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:42:34 compute-0 nova_compute[189381]: 2025-11-25 10:42:34.199 189385 DEBUG oslo_concurrency.lockutils [req-1fe38f46-1c1f-4d85-a779-92d1284e0402 req-9677e2a5-399f-4c18-849a-8382aaae87f7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:34 compute-0 nova_compute[189381]: 2025-11-25 10:42:34.199 189385 DEBUG oslo_concurrency.lockutils [req-1fe38f46-1c1f-4d85-a779-92d1284e0402 req-9677e2a5-399f-4c18-849a-8382aaae87f7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:34 compute-0 nova_compute[189381]: 2025-11-25 10:42:34.200 189385 DEBUG oslo_concurrency.lockutils [req-1fe38f46-1c1f-4d85-a779-92d1284e0402 req-9677e2a5-399f-4c18-849a-8382aaae87f7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:34 compute-0 nova_compute[189381]: 2025-11-25 10:42:34.200 189385 DEBUG nova.compute.manager [req-1fe38f46-1c1f-4d85-a779-92d1284e0402 req-9677e2a5-399f-4c18-849a-8382aaae87f7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] No waiting events found dispatching network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:42:34 compute-0 nova_compute[189381]: 2025-11-25 10:42:34.200 189385 WARNING nova.compute.manager [req-1fe38f46-1c1f-4d85-a779-92d1284e0402 req-9677e2a5-399f-4c18-849a-8382aaae87f7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received unexpected event network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 for instance with vm_state active and task_state None.
Nov 25 10:42:35 compute-0 nova_compute[189381]: 2025-11-25 10:42:35.293 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:36.044 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:42:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:36.044 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:42:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:42:36.046 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:42:37 compute-0 nova_compute[189381]: 2025-11-25 10:42:37.713 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:39 compute-0 podman[243747]: 2025-11-25 10:42:39.963142567 +0000 UTC m=+0.070454684 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:42:40 compute-0 nova_compute[189381]: 2025-11-25 10:42:40.297 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:42 compute-0 nova_compute[189381]: 2025-11-25 10:42:42.716 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:43 compute-0 podman[243767]: 2025-11-25 10:42:43.962508369 +0000 UTC m=+0.071850964 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:42:43 compute-0 podman[243766]: 2025-11-25 10:42:43.993369305 +0000 UTC m=+0.107648252 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal)
Nov 25 10:42:45 compute-0 nova_compute[189381]: 2025-11-25 10:42:45.303 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:46 compute-0 podman[243807]: 2025-11-25 10:42:46.007784238 +0000 UTC m=+0.124288830 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:42:47 compute-0 nova_compute[189381]: 2025-11-25 10:42:47.718 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:48 compute-0 podman[243833]: 2025-11-25 10:42:48.95627543 +0000 UTC m=+0.069763734 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 10:42:50 compute-0 nova_compute[189381]: 2025-11-25 10:42:50.309 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:52 compute-0 nova_compute[189381]: 2025-11-25 10:42:52.719 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:54 compute-0 podman[243854]: 2025-11-25 10:42:54.985515329 +0000 UTC m=+0.098563171 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:42:55 compute-0 nova_compute[189381]: 2025-11-25 10:42:55.314 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:57 compute-0 nova_compute[189381]: 2025-11-25 10:42:57.722 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:42:59 compute-0 podman[203557]: time="2025-11-25T10:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:42:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:42:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Nov 25 10:43:00 compute-0 nova_compute[189381]: 2025-11-25 10:43:00.321 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:01 compute-0 openstack_network_exporter[205722]: ERROR   10:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:43:01 compute-0 openstack_network_exporter[205722]: ERROR   10:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:43:01 compute-0 openstack_network_exporter[205722]: ERROR   10:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:43:01 compute-0 openstack_network_exporter[205722]: ERROR   10:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:43:01 compute-0 openstack_network_exporter[205722]: ERROR   10:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:43:01 compute-0 ovn_controller[97779]: 2025-11-25T10:43:01Z|00049|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Nov 25 10:43:01 compute-0 podman[243877]: 2025-11-25 10:43:01.971766744 +0000 UTC m=+0.080720308 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 10:43:01 compute-0 podman[243876]: 2025-11-25 10:43:01.988626238 +0000 UTC m=+0.101955398 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 25 10:43:02 compute-0 nova_compute[189381]: 2025-11-25 10:43:02.725 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.332 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.332 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081076e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.339 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'name': 'vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.342 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '613e6b77-82b6-426c-90b1-38d6776feb1f', 'name': 'vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.344 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 10:43:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:03.345 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/83ab44b9-7ddb-4994-9415-20b7dd9c081c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 25 10:43:04 compute-0 podman[243912]: 2025-11-25 10:43:04.031019305 +0000 UTC m=+0.132522836 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64)
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.176 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 25 Nov 2025 10:43:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-bdd9e647-1054-4de3-a0be-40d2ab8cbd63 x-openstack-request-id: req-bdd9e647-1054-4de3-a0be-40d2ab8cbd63 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.176 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "83ab44b9-7ddb-4994-9415-20b7dd9c081c", "name": "vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum", "status": "ACTIVE", "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "user_id": "af7a147d86064a21a94066f72173bba2", "metadata": {"metering.server_group": "d1a74954-729e-4b7f-a26d-ccdc925aa15b"}, "hostId": "5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd", "image": {"id": "d3f57a9d-2502-43be-9afd-d2b6e1c15c08", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/d3f57a9d-2502-43be-9afd-d2b6e1c15c08"}]}, "flavor": {"id": "8b869036-db8e-4fd3-b57a-e59e272f3c73", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/8b869036-db8e-4fd3-b57a-e59e272f3c73"}]}, "created": "2025-11-25T10:42:24Z", "updated": "2025-11-25T10:42:32Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.243", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0e:c3:2b"}, {"version": 4, "addr": "192.168.122.220", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0e:c3:2b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/83ab44b9-7ddb-4994-9415-20b7dd9c081c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/83ab44b9-7ddb-4994-9415-20b7dd9c081c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-25T10:42:32.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.176 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/83ab44b9-7ddb-4994-9415-20b7dd9c081c used request id req-bdd9e647-1054-4de3-a0be-40d2ab8cbd63 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.178 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83ab44b9-7ddb-4994-9415-20b7dd9c081c', 'name': 'vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.181 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
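The keystoneauth1 REQ/RESP pair above shows how ceilometer's discovery fills in metadata for an instance it has not cached yet: a single GET to /v2.1/servers/<uuid> at compute microversion 2.1. A minimal sketch of the same lookup with python-novaclient; the auth URL and credentials are placeholders, not values taken from this log:

from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client

auth = v3.Password(
    auth_url="https://keystone-internal.openstack.svc:5000/v3",  # placeholder endpoint
    username="ceilometer",            # placeholder credentials
    password="secret",
    project_name="service",
    user_domain_name="Default",
    project_domain_name="Default",
)
sess = session.Session(auth=auth)
nova = client.Client("2.1", session=sess)  # matches X-OpenStack-Nova-API-Version: 2.1

# One GET to /v2.1/servers/<uuid>, which keystoneauth1 logs as the curl
# command seen above.
server = nova.servers.get("83ab44b9-7ddb-4994-9415-20b7dd9c081c")
print(server.name, server.status,
      getattr(server, "OS-EXT-SRV-ATTR:instance_name", None))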
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.182 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:43:04.182363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.186 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes volume: 7412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.191 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.195 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 83ab44b9-7ddb-4994-9415-20b7dd9c081c / tap51ae07e4-a2 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.195 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.199 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.199 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.199 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.200 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.200 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.200 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.200 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.200 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.200 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.bytes.delta volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.200 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.200 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:43:04.200206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.201 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
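The "No delta meter predecessor" line above explains the zero samples for the newly launched instance 83ab44b9: a .delta meter is the difference between two consecutive cumulative readings, and the first poll of a NIC has nothing to subtract. A toy model of that behaviour (names illustrative, not ceilometer's internals):

_prev = {}  # (instance_id, device) -> last cumulative tx_bytes seen

def tx_bytes_delta(instance_id, device, tx_bytes_now):
    """Return bytes sent since the previous poll, or 0 on first sight."""
    key = (instance_id, device)
    if key not in _prev:              # "No delta meter predecessor ..."
        _prev[key] = tx_bytes_now
        return 0
    delta = tx_bytes_now - _prev[key]
    _prev[key] = tx_bytes_now
    return delta

# First poll of the new instance's NIC: no predecessor, so delta is 0,
# matching the 83ab44b9-.../network.outgoing.bytes.delta sample above.
print(tx_bytes_delta("83ab44b9", "tap51ae07e4-a2", 0))   # -> 0
print(tx_bytes_delta("83ab44b9", "tap51ae07e4-a2", 90))  # -> 90 on the next poll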
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.201 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.202 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:43:04.202004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.223 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/memory.usage volume: 48.97265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.251 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.274 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/memory.usage volume: 33.2890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.297 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
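memory.usage is reported in MB per instance (around 49 MB against the 512 MB m1.small flavor) and is derived from libvirt's per-domain memory statistics. A sketch of the underlying call, assuming one plausible derivation (guest-available minus unused); the exact arithmetic ceilometer applies may differ:

import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000004")  # OS-EXT-SRV-ATTR:instance_name above
stats = dom.memoryStats()  # counters in KiB: 'available', 'unused', 'rss', ...
if "available" in stats and "unused" in stats:
    # One plausible reading: guest-visible memory minus free memory, in MB.
    print("memory.usage ~", (stats["available"] - stats["unused"]) / 1024.0, "MB")
conn.close()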
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.298 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.298 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-25T10:43:04.298477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.298 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.300 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum>]
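The ERROR above is the pollster blacklisting contract: LibvirtInspector exposes cumulative counters, not rates, so the rate pollster raises PollsterPermanentError with the affected resources and the manager stops polling them on this source. A simplified sketch of that contract, assuming ceilometer's plugin_base API; this is not the real OutgoingBytesRatePollster body:

from ceilometer.polling import plugin_base

class OutgoingRateSketch(plugin_base.PollsterBase):
    """Illustrates the permanent-error contract only."""

    @property
    def default_discovery(self):
        return 'local_instances'

    def get_samples(self, manager, cache, resources):
        # No rate data can ever be served for these resources, so raise
        # PollsterPermanentError; the manager then blacklists them for this
        # source, as logged above.
        raise plugin_base.PollsterPermanentError(resources)
        yield  # never reached; keeps get_samples a generator as callers expect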
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.300 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.300 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.301 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.301 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:43:04.301077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.301 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.301 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.302 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.302 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
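All of the network.* meters in this cycle come from one libvirt interfaceStats() call per tap device. A minimal sketch, reusing the tap name from the inspector line above; the domain name is the instance_name reported by discovery:

import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000004")
# interfaceStats returns an 8-tuple of cumulative counters for the device.
(rx_bytes, rx_packets, rx_errs, rx_drop,
 tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap51ae07e4-a2")
print("network.incoming.bytes:", rx_bytes)
print("network.outgoing.packets:", tx_packets)
print("network.outgoing.packets.drop:", tx_drop)
conn.close()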
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.302 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.302 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.303 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.303 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:43:04.302833) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.303 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.303 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.304 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.304 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.304 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.304 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.304 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.304 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.305 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.305 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.306 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:43:04.304514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.306 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.306 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/cpu volume: 408250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.306 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:43:04.306592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.307 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/cpu volume: 35350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.307 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/cpu volume: 31470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.307 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 44130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
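The cpu meter is cumulative guest CPU time in nanoseconds (44130000000 ns is about 44.1 s for instance 31174924). Turning it into a utilisation figure needs two readings; a sketch using libvirt's dom.info(), whose last field is cpuTime in ns:

import time
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000001")  # test_0's libvirt name above
# info() = [state, maxMem, memory, nrVirtCpu, cpuTime(ns)]
t0, cpu0 = time.monotonic(), dom.info()[4]
time.sleep(10)
t1, cpu1 = time.monotonic(), dom.info()[4]
nvcpus = dom.info()[3]
util_pct = (cpu1 - cpu0) / ((t1 - t0) * 1e9) / nvcpus * 100
print(f"cpu_util ~ {util_pct:.1f}% of {nvcpus} vCPU over {t1 - t0:.0f}s")
conn.close()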
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.308 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.308 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.308 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.309 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.309 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:43:04.308688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.309 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.310 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.310 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:43:04.310666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.341 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.342 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.342 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.364 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.365 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.365 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.389 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.389 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.390 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.408 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.408 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.409 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
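disk.device.capacity emits one sample per attached device; the three values per instance line up with the flavor (disk=1 GiB plus ephemeral=1 GiB, both 1073741824 bytes) and a small third device, plausibly the config drive ('config_drive': 'True' in the Nova body above). A sketch of the underlying libvirt blockInfo() call; the device names are assumptions:

import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000003")
for dev in ("vda", "vdb", "hda"):  # assumed: root disk, ephemeral, config drive
    try:
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "capacity:", capacity, "allocation:", allocation)
    except libvirt.libvirtError:
        pass  # device not attached to this domain
conn.close()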
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:43:04.410070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.481 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.481 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.482 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.553 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.553 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.554 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.633 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 18388992 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.633 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 4096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.633 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.703 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.703 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.704 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.705 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.705 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.705 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 1593102466 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.705 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 365927498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:43:04.705270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.706 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 408314029 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.706 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 625402940 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.706 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 104257328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.706 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 84305615 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.706 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 455493259 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.707 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 904806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.707 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 1914765 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.707 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.707 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.707 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.708 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.709 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.709 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.709 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.709 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.710 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:43:04.708763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.710 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 583 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.711 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.711 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.711 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.711 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.712 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
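
The _stats_to_sample DEBUG lines above share a fixed shape: a 36-character instance UUID, a meter name, and an integer volume, one line per block device. A minimal parsing sketch under that assumption follows; the regex and both helper names are illustrative, not part of ceilometer:

import re
from collections import defaultdict

# Matches the "<instance-uuid>/<meter> volume: <n>" fragment of the
# _stats_to_sample DEBUG lines above.
SAMPLE_RE = re.compile(
    r"(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})"
    r"/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
)

def parse_samples(lines):
    """Yield (instance_uuid, meter, volume) tuples from agent log lines."""
    for line in lines:
        m = SAMPLE_RE.search(line)
        if m:
            yield m.group("uuid"), m.group("meter"), int(m.group("volume"))

def totals_per_instance(lines, meter):
    """Sum the per-device volumes of one meter for each instance."""
    totals = defaultdict(int)
    for uuid, name, volume in parse_samples(lines):
        if name == meter:
            totals[uuid] += volume
    return dict(totals)

Fed this excerpt, totals_per_instance(lines, "disk.device.read.requests") sums the three devices of instance 44e7d3d0-d059-412e-a1a9-467d774d2bee to 844 + 173 + 124 = 1141 requests.
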
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.712 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.713 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.713 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.713 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.713 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:43:04.713047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.714 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.714 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.714 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.714 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.714 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.715 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.715 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.715 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.715 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
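
Note the division of labour in these lines: worker 14 does the polling while worker 12 records "Updated heartbeat for <meter> (<ISO timestamp>)". That visible format is enough to build a staleness check; the 600-second threshold and the helper name below are assumptions, not ceilometer configuration:

import re
from datetime import datetime, timezone

# Matches worker 12's "Updated heartbeat for <meter> (<ISO timestamp>)" lines.
HEARTBEAT_RE = re.compile(
    r"Updated heartbeat for (?P<meter>[\w.]+) \((?P<ts>[0-9T:.\-]+)\)"
)

def stale_heartbeats(lines, now=None, max_age_s=600):
    """Return {meter: last_heartbeat} for meters older than max_age_s.

    The logged timestamps carry no zone; UTC is assumed here.
    """
    now = now or datetime.now(timezone.utc)
    latest = {}
    for line in lines:
        m = HEARTBEAT_RE.search(line)
        if m:
            ts = datetime.fromisoformat(m.group("ts"))
            latest[m.group("meter")] = ts.replace(tzinfo=timezone.utc)
    return {meter: ts for meter, ts in latest.items()
            if (now - ts).total_seconds() > max_age_s}
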
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.716 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.716 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.717 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.717 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:43:04.716713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.717 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.717 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 41783296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.718 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.718 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.718 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.718 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.718 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.719 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.719 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.719 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.720 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.720 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.720 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.720 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 31880690541 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:43:04.720522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.721 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 231382257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.721 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.721 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 1614620919 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.721 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 10993280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.721 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.722 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.722 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.722 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.722 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.722 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.723 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
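
The disk.device.write.latency volumes are large integers (e.g. 31880690541 for the first device of instance 44e7d3d0-d059-412e-a1a9-467d774d2bee). Assuming they are the cumulative nanosecond totals that libvirt's block stats expose, pairing them with the matching disk.device.write.requests counter (237 for the same device, logged further below) gives an average service time. A back-of-the-envelope sketch under that assumption:

def avg_write_latency_ms(total_latency_ns, total_requests):
    """Average per-request write latency in milliseconds.

    Assumes the cumulative nanosecond counter that libvirt block stats
    expose (what the disk.device.write.latency samples above appear to
    carry), paired with the matching disk.device.write.requests counter.
    """
    if total_requests == 0:
        return 0.0
    return total_latency_ns / total_requests / 1e6

# First device of instance 44e7d3d0-...: 31880690541 ns over the 237
# write requests logged further below ~= 134.5 ms per write on average.
print(avg_write_latency_ms(31_880_690_541, 237))
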
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.723 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.724 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.724 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.724 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:43:04.724082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.724 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.724 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
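
All four instances report power.state volume 1. Assuming the meter carries libvirt's virDomainState numbering (where 1 means running), a small lookup makes the samples readable; the table is the standard libvirt ordering, not something taken from this log:

# Standard libvirt virDomainState numbering, assumed to be what the
# power.state samples above carry.
LIBVIRT_POWER_STATES = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}

def describe_power_state(volume):
    """Map a power.state sample volume to a libvirt state name."""
    return LIBVIRT_POWER_STATES.get(volume, "unknown(%d)" % volume)

print(describe_power_state(1))  # -> "running", matching all four samples above
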
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.725 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.725 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.726 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:43:04.725809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.726 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.726 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.726 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.727 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.727 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.727 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.727 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.728 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.728 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.728 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.729 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.729 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:43:04.729494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.730 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.bytes.delta volume: 1480 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.730 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.730 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
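
A *.delta meter such as network.incoming.bytes.delta reports only the traffic seen since the previous poll (3431 bytes for instance 44e7d3d0-d059-412e-a1a9-467d774d2bee this cycle), so converting it into a rate needs the polling interval. The sketch below takes the interval as an input rather than reading it from the agent's polling.yaml, and the 300 s figure in the example is purely an assumption:

def delta_to_rate(delta_bytes, interval_s):
    """Average bytes/second over one polling interval.

    interval_s is the interval the agent is configured with; it is an
    input here, not something parsed from the log.
    """
    if interval_s <= 0:
        raise ValueError("polling interval must be positive")
    return delta_bytes / interval_s

# The 3431-byte delta above over an assumed 300 s interval ~= 11.4 B/s.
print(delta_to_rate(3431, 300))
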
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.731 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.731 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:43:04.731689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.732 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.732 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.732 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.732 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.733 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.733 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.733 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.733 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.734 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.734 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.734 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.735 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-25T10:43:04.735633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.736 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum>]
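
The ERROR above is the permanent-failure path: the DEBUG line just before it shows LibvirtInspector cannot supply data for IncomingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager blacklists those resources for this source instead of retrying every cycle. A simplified sketch of that pattern follows; it illustrates the mechanism rather than reproducing ceilometer's actual code, and the names (pollster.name, get_samples' signature) are assumptions apart from PollsterPermanentError itself:

class PollsterPermanentError(Exception):
    """Raised when some resources can never be polled successfully."""
    def __init__(self, resources):
        super().__init__(resources)
        self.fail_res_list = resources

def poll_once(pollster, resources, blacklist):
    """One polling pass that honours a per-source blacklist (a set)."""
    candidates = [r for r in resources if r not in blacklist]
    try:
        return list(pollster.get_samples(candidates))
    except PollsterPermanentError as err:
        # Mirrors the ERROR line above: stop polling these resources for good.
        blacklist.update(err.fail_res_list)
        print("Prevent pollster %s from polling %s anymore!"
              % (pollster.name, err.fail_res_list))
        return []
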
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.736 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.736 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:43:04.736663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:43:04.738069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.738 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.738 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.738 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.739 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.740 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.741 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:43:04.740784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.742 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.742 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.742 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:43:04.742156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.742 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.742 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.742 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.743 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.743 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.744 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.744 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.744 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:43:04.744017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.745 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.745 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.746 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.747 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:43:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:43:04.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
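The block above is one complete polling cycle: the compute agent walks every configured pollster and emits one "Finished processing" line per meter. For a cumulative counter such as network.incoming.bytes, the .delta and .rate variants are derived from two successive polls; the following is a hypothetical illustration of that arithmetic (the function name and arguments are invented for this sketch, not ceilometer API):

    # Hypothetical helper: how a cumulative byte counter yields the
    # .delta and .rate meter variants seen in the polling cycle above.
    def delta_and_rate(prev_bytes, prev_ts, cur_bytes, cur_ts):
        delta = cur_bytes - prev_bytes         # network.incoming.bytes.delta
        rate = delta / (cur_ts - prev_ts)      # network.incoming.bytes.rate, bytes/s
        return delta, rate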
Nov 25 10:43:05 compute-0 nova_compute[189381]: 2025-11-25 10:43:05.323 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:05 compute-0 ovn_controller[97779]: 2025-11-25T10:43:05Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:c3:2b 192.168.0.243
Nov 25 10:43:05 compute-0 ovn_controller[97779]: 2025-11-25T10:43:05Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:c3:2b 192.168.0.243
Nov 25 10:43:07 compute-0 nova_compute[189381]: 2025-11-25 10:43:07.727 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.047 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.047 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.048 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.158 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.228 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.229 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.296 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.297 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.364 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.365 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.436 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.444 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.508 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.510 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.579 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.583 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.651 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.653 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.715 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.722 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.792 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.794 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.860 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.862 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.928 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:09 compute-0 nova_compute[189381]: 2025-11-25 10:43:09.929 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.001 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.008 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.074 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.075 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.144 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.146 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.209 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.210 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.274 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
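The repeated qemu-img probes above are issued through oslo.concurrency's prlimit wrapper, which caps the child process's address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30 seconds) so a hung or malformed image cannot stall the resource audit. A minimal sketch of such a call, assuming oslo.concurrency; qemu_img_info is an illustrative name for this sketch, not necessarily nova's exact helper:

    from oslo_concurrency import processutils

    # Limits mirroring the logged command line.
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        cpu_time=30,               # --cpu=30
        address_space=1024 ** 3,   # --as=1073741824 (1 GiB)
    )

    def qemu_img_info(path):
        # --force-share allows probing an image the running guest holds open.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=QEMU_IMG_LIMITS)
        return out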
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.328 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.669 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.670 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4649MB free_disk=72.14109802246094GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.671 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.671 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.779 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.780 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.780 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 613e6b77-82b6-426c-90b1-38d6776feb1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.780 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.781 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.781 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.800 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.821 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.822 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.849 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:43:10 compute-0 nova_compute[189381]: 2025-11-25 10:43:10.869 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:43:10 compute-0 podman[243998]: 2025-11-25 10:43:10.968250113 +0000 UTC m=+0.076160868 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 10:43:11 compute-0 nova_compute[189381]: 2025-11-25 10:43:11.173 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:43:11 compute-0 nova_compute[189381]: 2025-11-25 10:43:11.190 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
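The inventory payload above is what placement turns into schedulable capacity, per resource class: capacity = (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    # Worked example using the inventory data logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2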
Nov 25 10:43:11 compute-0 nova_compute[189381]: 2025-11-25 10:43:11.218 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:43:11 compute-0 nova_compute[189381]: 2025-11-25 10:43:11.219 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:43:12 compute-0 nova_compute[189381]: 2025-11-25 10:43:12.220 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:12 compute-0 nova_compute[189381]: 2025-11-25 10:43:12.731 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:14 compute-0 nova_compute[189381]: 2025-11-25 10:43:14.026 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:14 compute-0 nova_compute[189381]: 2025-11-25 10:43:14.027 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:43:14 compute-0 podman[244019]: 2025-11-25 10:43:14.777304811 +0000 UTC m=+0.084034404 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 25 10:43:14 compute-0 podman[244020]: 2025-11-25 10:43:14.781238123 +0000 UTC m=+0.083175219 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:43:15 compute-0 nova_compute[189381]: 2025-11-25 10:43:15.031 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:43:15 compute-0 nova_compute[189381]: 2025-11-25 10:43:15.031 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:43:15 compute-0 nova_compute[189381]: 2025-11-25 10:43:15.032 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:43:15 compute-0 nova_compute[189381]: 2025-11-25 10:43:15.333 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:17 compute-0 podman[244065]: 2025-11-25 10:43:17.010313601 +0000 UTC m=+0.123391394 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Nov 25 10:43:17 compute-0 nova_compute[189381]: 2025-11-25 10:43:17.734 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:18 compute-0 nova_compute[189381]: 2025-11-25 10:43:18.689 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updating instance_info_cache with network_info: [{"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:43:18 compute-0 nova_compute[189381]: 2025-11-25 10:43:18.711 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:43:18 compute-0 nova_compute[189381]: 2025-11-25 10:43:18.711 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
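The info_cache entry updated above is a JSON list of VIFs; the instance's fixed and floating addresses sit two levels down in that structure. A minimal parse, assuming NETWORK_INFO_JSON holds the list pasted from the log:

    import json

    # NETWORK_INFO_JSON: the network_info list logged above (assumed in scope).
    vif = json.loads(NETWORK_INFO_JSON)[0]            # first (only) VIF
    fixed = vif['network']['subnets'][0]['ips'][0]
    print(fixed['address'])                           # 192.168.0.183
    print(fixed['floating_ips'][0]['address'])        # 192.168.122.189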
Nov 25 10:43:18 compute-0 nova_compute[189381]: 2025-11-25 10:43:18.711 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:18 compute-0 nova_compute[189381]: 2025-11-25 10:43:18.711 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:18 compute-0 nova_compute[189381]: 2025-11-25 10:43:18.712 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:18 compute-0 nova_compute[189381]: 2025-11-25 10:43:18.712 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:43:19 compute-0 nova_compute[189381]: 2025-11-25 10:43:19.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:19 compute-0 nova_compute[189381]: 2025-11-25 10:43:19.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:43:19 compute-0 podman[244091]: 2025-11-25 10:43:19.968711384 +0000 UTC m=+0.074502280 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 10:43:20 compute-0 nova_compute[189381]: 2025-11-25 10:43:20.335 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:22 compute-0 nova_compute[189381]: 2025-11-25 10:43:22.737 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:25 compute-0 nova_compute[189381]: 2025-11-25 10:43:25.339 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:25 compute-0 podman[244111]: 2025-11-25 10:43:25.959394071 +0000 UTC m=+0.068102306 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:43:27 compute-0 nova_compute[189381]: 2025-11-25 10:43:27.739 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:29 compute-0 podman[203557]: time="2025-11-25T10:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:43:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:43:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Nov 25 10:43:30 compute-0 nova_compute[189381]: 2025-11-25 10:43:30.343 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:31 compute-0 openstack_network_exporter[205722]: ERROR   10:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:43:31 compute-0 openstack_network_exporter[205722]: ERROR   10:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:43:31 compute-0 openstack_network_exporter[205722]: ERROR   10:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:43:31 compute-0 openstack_network_exporter[205722]: ERROR   10:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:43:31 compute-0 openstack_network_exporter[205722]: ERROR   10:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:43:32 compute-0 nova_compute[189381]: 2025-11-25 10:43:32.742 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:32 compute-0 podman[244134]: 2025-11-25 10:43:32.953832897 +0000 UTC m=+0.070829234 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 25 10:43:32 compute-0 podman[244135]: 2025-11-25 10:43:32.97411359 +0000 UTC m=+0.085090274 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 10:43:34 compute-0 podman[244171]: 2025-11-25 10:43:34.987234107 +0000 UTC m=+0.099639832 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1214.1726694543)
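
The podman health_status events above embed each container's full config_data as a Python-style dict literal (single quotes, True/False), so it can be recovered from an exported journal line with ast.literal_eval. A minimal sketch, assuming the journal is piped in on stdin; extract_config_data is an illustrative helper, not part of podman or edpm_ansible, and it relies on the fact that the entries shown here contain no braces inside quoted strings:

    import ast
    import sys

    def extract_config_data(line):
        """Pull the config_data={...} literal out of a health_status line."""
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    # Valid Python literal: safe to parse, no eval needed.
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    for line in sys.stdin:
        if "config_data=" in line:
            cfg = extract_config_data(line)
            print(cfg["image"], cfg["healthcheck"]["test"])
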
Nov 25 10:43:35 compute-0 nova_compute[189381]: 2025-11-25 10:43:35.346 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:43:36.045 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:43:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:43:36.046 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:43:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:43:36.046 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:43:37 compute-0 nova_compute[189381]: 2025-11-25 10:43:37.744 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:40 compute-0 nova_compute[189381]: 2025-11-25 10:43:40.351 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:41 compute-0 podman[244194]: 2025-11-25 10:43:41.984970451 +0000 UTC m=+0.088794160 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:43:42 compute-0 nova_compute[189381]: 2025-11-25 10:43:42.747 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:44 compute-0 podman[244213]: 2025-11-25 10:43:44.997233603 +0000 UTC m=+0.108476465 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:43:44 compute-0 podman[244214]: 2025-11-25 10:43:44.998464159 +0000 UTC m=+0.105400367 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:43:45 compute-0 nova_compute[189381]: 2025-11-25 10:43:45.356 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:47 compute-0 nova_compute[189381]: 2025-11-25 10:43:47.750 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:48 compute-0 podman[244257]: 2025-11-25 10:43:48.01941802 +0000 UTC m=+0.131327821 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 25 10:43:50 compute-0 nova_compute[189381]: 2025-11-25 10:43:50.358 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:50 compute-0 podman[244282]: 2025-11-25 10:43:50.991416286 +0000 UTC m=+0.103411200 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:43:52 compute-0 nova_compute[189381]: 2025-11-25 10:43:52.752 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:55 compute-0 nova_compute[189381]: 2025-11-25 10:43:55.363 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:56 compute-0 podman[244301]: 2025-11-25 10:43:56.940922041 +0000 UTC m=+0.055795782 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:43:57 compute-0 nova_compute[189381]: 2025-11-25 10:43:57.755 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:43:58 compute-0 sshd-session[244323]: Connection closed by authenticating user root 171.244.51.45 port 57846 [preauth]
Nov 25 10:43:59 compute-0 podman[203557]: time="2025-11-25T10:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:43:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:43:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 25 10:44:00 compute-0 nova_compute[189381]: 2025-11-25 10:44:00.377 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:01 compute-0 nova_compute[189381]: 2025-11-25 10:44:01.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:01 compute-0 nova_compute[189381]: 2025-11-25 10:44:01.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 10:44:01 compute-0 openstack_network_exporter[205722]: ERROR   10:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:44:01 compute-0 openstack_network_exporter[205722]: ERROR   10:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:44:01 compute-0 openstack_network_exporter[205722]: ERROR   10:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:44:01 compute-0 openstack_network_exporter[205722]: ERROR   10:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:44:01 compute-0 openstack_network_exporter[205722]: ERROR   10:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
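
These errors are likely benign on a compute node: ovn-northd runs on the control plane, so no control socket for it exists here; the ovsdb-server probe similarly finds no socket in the directories the exporter watches; and the dpif-netdev calls fail because this host has no userspace (DPDK) datapath configured. A quick way to see which control sockets actually exist; the glob patterns are assumptions based on the /run/openvswitch and /run/ovn mounts in the exporter's volumes above:

    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "no control sockets")
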
Nov 25 10:44:02 compute-0 nova_compute[189381]: 2025-11-25 10:44:02.757 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:03 compute-0 podman[244326]: 2025-11-25 10:44:03.988233795 +0000 UTC m=+0.088801151 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 25 10:44:03 compute-0 podman[244325]: 2025-11-25 10:44:03.993109255 +0000 UTC m=+0.082251192 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 25 10:44:05 compute-0 nova_compute[189381]: 2025-11-25 10:44:05.385 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:05 compute-0 podman[244362]: 2025-11-25 10:44:05.984951389 +0000 UTC m=+0.087607116 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, com.redhat.component=ubi9-container)
Nov 25 10:44:07 compute-0 nova_compute[189381]: 2025-11-25 10:44:07.759 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.036 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.073 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.074 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.074 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.075 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.183 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.263 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.265 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.328 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.330 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.391 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.392 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.459 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.468 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.537 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.538 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.602 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.604 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.674 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.675 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.741 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.749 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.823 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.824 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.886 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.887 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.953 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:09 compute-0 nova_compute[189381]: 2025-11-25 10:44:09.954 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.016 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.022 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.081 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.082 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.145 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.146 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.224 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.226 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.290 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
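
Each Running/returned pair above is one bounded image probe from the resource audit: nova runs qemu-img info under oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) so a malformed disk image cannot hang or bloat the probe, while --force-share allows inspection while the guest holds the image open. A sketch reproducing the logged command, assuming qemu-img and oslo.concurrency are installed (the instance path is taken from the log; any image path works):

    import json
    import subprocess

    def qemu_img_info(path):
        # Mirrors the audited command, resource limits included.
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824", "--cpu=30", "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        return json.loads(subprocess.check_output(cmd))

    info = qemu_img_info(
        "/var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk")
    print(info["format"], info["virtual-size"])
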
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.389 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.652 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.654 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4634MB free_disk=72.14111709594727GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.654 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.654 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.953 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.953 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.953 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 613e6b77-82b6-426c-90b1-38d6776feb1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.954 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.954 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:44:10 compute-0 nova_compute[189381]: 2025-11-25 10:44:10.954 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:44:11 compute-0 nova_compute[189381]: 2025-11-25 10:44:11.285 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:44:11 compute-0 nova_compute[189381]: 2025-11-25 10:44:11.311 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
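
The inventory dict above is what placement uses to size this node: schedulable capacity per resource class is (total - reserved) * allocation_ratio, so the host advertises 32 VCPU (8 physical at 4.0 overcommit), 7167 MB of RAM, and 70.2 GB of disk, consistent with the four 1-vCPU/512 MB/2 GB instances and the "Total usable vcpus: 8, total allocated vcpus: 4" audit above. The arithmetic, using the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
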
Nov 25 10:44:11 compute-0 nova_compute[189381]: 2025-11-25 10:44:11.313 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:44:11 compute-0 nova_compute[189381]: 2025-11-25 10:44:11.314 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:44:11 compute-0 nova_compute[189381]: 2025-11-25 10:44:11.315 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:11 compute-0 nova_compute[189381]: 2025-11-25 10:44:11.315 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:44:11 compute-0 nova_compute[189381]: 2025-11-25 10:44:11.430 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:44:12 compute-0 nova_compute[189381]: 2025-11-25 10:44:12.417 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:12 compute-0 nova_compute[189381]: 2025-11-25 10:44:12.419 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:12 compute-0 nova_compute[189381]: 2025-11-25 10:44:12.762 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:12 compute-0 podman[244429]: 2025-11-25 10:44:12.971845533 +0000 UTC m=+0.086543616 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 25 10:44:15 compute-0 nova_compute[189381]: 2025-11-25 10:44:15.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:15 compute-0 nova_compute[189381]: 2025-11-25 10:44:15.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:44:15 compute-0 nova_compute[189381]: 2025-11-25 10:44:15.025 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:44:15 compute-0 nova_compute[189381]: 2025-11-25 10:44:15.238 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:44:15 compute-0 nova_compute[189381]: 2025-11-25 10:44:15.239 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:44:15 compute-0 nova_compute[189381]: 2025-11-25 10:44:15.240 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:44:15 compute-0 nova_compute[189381]: 2025-11-25 10:44:15.241 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:44:15 compute-0 nova_compute[189381]: 2025-11-25 10:44:15.395 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:15 compute-0 podman[244449]: 2025-11-25 10:44:15.971892036 +0000 UTC m=+0.072228067 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:44:15 compute-0 podman[244448]: 2025-11-25 10:44:15.97273177 +0000 UTC m=+0.078707213 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Nov 25 10:44:17 compute-0 nova_compute[189381]: 2025-11-25 10:44:17.149 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:44:17 compute-0 nova_compute[189381]: 2025-11-25 10:44:17.167 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:44:17 compute-0 nova_compute[189381]: 2025-11-25 10:44:17.168 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:44:17 compute-0 nova_compute[189381]: 2025-11-25 10:44:17.168 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:17 compute-0 nova_compute[189381]: 2025-11-25 10:44:17.169 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:17 compute-0 nova_compute[189381]: 2025-11-25 10:44:17.169 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:44:17 compute-0 nova_compute[189381]: 2025-11-25 10:44:17.764 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:18 compute-0 nova_compute[189381]: 2025-11-25 10:44:18.160 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:19 compute-0 podman[244492]: 2025-11-25 10:44:19.006973552 +0000 UTC m=+0.113025820 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 10:44:19 compute-0 nova_compute[189381]: 2025-11-25 10:44:19.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:19 compute-0 nova_compute[189381]: 2025-11-25 10:44:19.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:20 compute-0 nova_compute[189381]: 2025-11-25 10:44:20.398 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:21 compute-0 podman[244517]: 2025-11-25 10:44:21.969989136 +0000 UTC m=+0.084939723 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:44:22 compute-0 nova_compute[189381]: 2025-11-25 10:44:22.766 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:25 compute-0 nova_compute[189381]: 2025-11-25 10:44:25.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:25 compute-0 nova_compute[189381]: 2025-11-25 10:44:25.401 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:27 compute-0 nova_compute[189381]: 2025-11-25 10:44:27.768 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:27 compute-0 podman[244538]: 2025-11-25 10:44:27.94809274 +0000 UTC m=+0.058971376 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:44:29 compute-0 podman[203557]: time="2025-11-25T10:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:44:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:44:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 25 10:44:30 compute-0 nova_compute[189381]: 2025-11-25 10:44:30.407 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:31 compute-0 nova_compute[189381]: 2025-11-25 10:44:31.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:44:31 compute-0 openstack_network_exporter[205722]: ERROR   10:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:44:31 compute-0 openstack_network_exporter[205722]: ERROR   10:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:44:31 compute-0 openstack_network_exporter[205722]: ERROR   10:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:44:31 compute-0 openstack_network_exporter[205722]: ERROR   10:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:44:31 compute-0 openstack_network_exporter[205722]: ERROR   10:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:44:32 compute-0 nova_compute[189381]: 2025-11-25 10:44:32.771 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:34 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 10:44:34 compute-0 podman[244564]: 2025-11-25 10:44:34.287305021 +0000 UTC m=+0.077266602 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 10:44:34 compute-0 podman[244563]: 2025-11-25 10:44:34.290496813 +0000 UTC m=+0.085877930 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 25 10:44:35 compute-0 nova_compute[189381]: 2025-11-25 10:44:35.412 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:44:36.046 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:44:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:44:36.047 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:44:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:44:36.047 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:44:37 compute-0 podman[244598]: 2025-11-25 10:44:37.005809608 +0000 UTC m=+0.103243479 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 10:44:37 compute-0 nova_compute[189381]: 2025-11-25 10:44:37.774 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:40 compute-0 nova_compute[189381]: 2025-11-25 10:44:40.415 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:42 compute-0 nova_compute[189381]: 2025-11-25 10:44:42.776 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:43 compute-0 podman[244618]: 2025-11-25 10:44:43.975170352 +0000 UTC m=+0.081061102 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 10:44:45 compute-0 nova_compute[189381]: 2025-11-25 10:44:45.419 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:46 compute-0 podman[244638]: 2025-11-25 10:44:46.958775008 +0000 UTC m=+0.069933211 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:44:46 compute-0 podman[244637]: 2025-11-25 10:44:46.960873039 +0000 UTC m=+0.074209205 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 10:44:47 compute-0 nova_compute[189381]: 2025-11-25 10:44:47.778 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:50 compute-0 podman[244682]: 2025-11-25 10:44:50.004246752 +0000 UTC m=+0.116395217 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:44:50 compute-0 nova_compute[189381]: 2025-11-25 10:44:50.422 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:52 compute-0 nova_compute[189381]: 2025-11-25 10:44:52.781 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:52 compute-0 podman[244707]: 2025-11-25 10:44:52.961020356 +0000 UTC m=+0.073075492 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 10:44:55 compute-0 nova_compute[189381]: 2025-11-25 10:44:55.425 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:57 compute-0 nova_compute[189381]: 2025-11-25 10:44:57.784 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:44:58 compute-0 podman[244726]: 2025-11-25 10:44:58.965298773 +0000 UTC m=+0.067256615 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:44:59 compute-0 podman[203557]: time="2025-11-25T10:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:44:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:44:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.206 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.233 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 31174924-a3e8-4662-baad-ac9aa49c01ab _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.234 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 44e7d3d0-d059-412e-a1a9-467d774d2bee _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.235 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 613e6b77-82b6-426c-90b1-38d6776feb1f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.235 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 83ab44b9-7ddb-4994-9415-20b7dd9c081c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.236 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.236 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.237 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.237 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.238 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.238 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.239 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.240 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.398 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.402 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.164s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.418 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.420 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:45:00 compute-0 nova_compute[189381]: 2025-11-25 10:45:00.428 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:01 compute-0 openstack_network_exporter[205722]: ERROR   10:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:45:01 compute-0 openstack_network_exporter[205722]: ERROR   10:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:45:01 compute-0 openstack_network_exporter[205722]: ERROR   10:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:45:01 compute-0 openstack_network_exporter[205722]: ERROR   10:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:45:01 compute-0 openstack_network_exporter[205722]: ERROR   10:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:45:02 compute-0 nova_compute[189381]: 2025-11-25 10:45:02.786 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.332 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.333 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
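The block of "Registering pollster" lines above is the agent enumerating every compute pollster it loaded through stevedore entry points and binding each one to a single shared ThreadPoolExecutor, with per-agent caches that all start empty ({}). A minimal sketch of that registration step, assuming the `ceilometer.poll.compute` entry-point namespace and an illustrative executor size:

```python
# Sketch of the registration step the log shows above: load pollsters with
# stevedore and hand each one to a shared thread pool. The namespace string
# matches ceilometer's compute agent; the executor size and the cache dicts
# are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
from stevedore import extension

manager = extension.ExtensionManager(
    namespace='ceilometer.poll.compute',  # entry-point group for compute pollsters
    invoke_on_load=False,
)
executor = ThreadPoolExecutor(max_workers=4)

registered = []
for ext in manager:
    # Mirror the log line: one registration record per pollster, all sharing
    # the executor plus per-agent caches that begin empty.
    registered.append({
        'pollster': ext,          # stevedore.extension.Extension wrapper
        'executor': executor,
        'cache': {},
        'history': {},
        'discovery_cache': {},
    })
    print(f'Registering pollster [{ext.name}]')
```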
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.338 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'name': 'vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.340 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '613e6b77-82b6-426c-90b1-38d6776feb1f', 'name': 'vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.343 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83ab44b9-7ddb-4994-9415-20b7dd9c081c', 'name': 'vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.345 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
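Each "instance data" line above is one discovery result: a plain dict combining libvirt state with Nova metadata (flavor, image, tenant, and the optional `metering.server_group` key, absent on test_0). A hypothetical helper that consumes dicts of this shape, grouping active instances by server group:

```python
# Hypothetical consumer of the discovery output shown above: group running
# instances by the optional 'metering.server_group' metadata key.
from collections import defaultdict

def group_by_server_group(instances):
    groups = defaultdict(list)
    for inst in instances:
        if inst.get('status') != 'active':
            continue
        key = inst.get('metadata', {}).get('metering.server_group', 'ungrouped')
        groups[key].append(inst['id'])
    return dict(groups)

# Example shaped like the log's 'instance data' entries (trimmed):
instances = [
    {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'status': 'active',
     'metadata': {}},
    {'id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'status': 'active',
     'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}},
]
print(group_by_server_group(instances))
```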
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.345 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.345 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:45:03.345912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.349 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes volume: 7482 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.353 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.355 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.359 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
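The lines above trace the fixed sequence every pollster run follows: check whether the pollster belongs to a coordinated source (here the group name is None, so no hashring lookup happens), stamp a heartbeat, then emit one sample per discovered resource. A compressed sketch of that sequence; the function and field names are illustrative, not ceilometer's API:

```python
# Compressed sketch of one pollster run as traced in the log:
# coordination check -> heartbeat -> one sample per resource.
import datetime

def run_pollster(name, resources, get_stat, heartbeats):
    coordination_group = None          # matches "[None]" in the log lines
    if coordination_group is not None:
        pass                           # a coordinated source would consult the hashring here
    heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
    samples = []
    for res in resources:
        volume = get_stat(res)         # e.g. cumulative tx bytes read from libvirt
        print(f"{res}/{name} volume: {volume}")
        samples.append((res, name, volume))
    return samples

heartbeats = {}
tx_bytes = {'44e7d3d0-d059-412e-a1a9-467d774d2bee': 7482}
run_pollster('network.outgoing.bytes', list(tx_bytes), tx_bytes.get, heartbeats)
```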
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.359 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.360 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.360 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.360 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:45:03.360191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.360 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.360 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes.delta volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.361 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
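The delta readings above line up with a per-resource cache of the previous cumulative value: 83ab44b9-... reports a delta equal to its full cumulative counter (2258), which is what you would expect on the first cycle for a resource with no cached baseline. A sketch of that derivation, under the assumption that a missing baseline yields the full reading:

```python
# Derive a *.delta meter from successive cumulative readings: keep the last
# value per resource and publish the difference. A resource seen for the
# first time has no baseline, so the full cumulative value comes through.
_last = {}

def delta(resource_id, cumulative):
    prev = _last.get(resource_id)
    _last[resource_id] = cumulative
    if prev is None:
        return cumulative             # first sighting: no baseline yet
    return max(cumulative - prev, 0)  # guard against counter resets

print(delta('83ab44b9', 2258))  # -> 2258 (first cycle, as in the log)
print(delta('83ab44b9', 2330))  # -> 72 on the next cycle
```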
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.361 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:45:03.361961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.382 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/memory.usage volume: 48.96484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.411 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.434 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.457 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
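The memory.usage samples are fractional MiB against the 512 MiB m1.small flavor. One plausible derivation (an assumption here, not a claim about ceilometer's exact code path) is converting the KiB figures that libvirt's memoryStats() returns into MiB of memory in use:

```python
# Assumed derivation of memory.usage: libvirt reports memoryStats() values
# in KiB; 'available' minus 'unused' approximates what the guest is
# actively using, converted to MiB.
def usage_mib(mem_stats):
    used_kib = mem_stats['available'] - mem_stats['unused']
    return used_kib / 1024.0

# Hypothetical stats reproducing the first sample above:
print(usage_mib({'available': 498960, 'unused': 448820}))  # 48.96484375 MiB
```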
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.458 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
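The "Skip pollster" line shows the short-circuit path: when the discovery attached to a pollster yields nothing for it to work on this cycle, the pollster never runs. Why the rate variant in particular saw nothing new is not visible from this log; the hypothetical sketch below only illustrates the mechanism of caching a discovery result per cycle and skipping on an empty resource list:

```python
# Hypothetical per-cycle discovery cache behind the skip message: resolve a
# discovery method at most once per cycle, and skip a pollster outright
# when its resource list comes back empty.
def resources_for(discovery_cache, method, discover):
    if method not in discovery_cache:
        discovery_cache[method] = discover()
    return discovery_cache[method]

cache = {}
resources = resources_for(cache, 'local_instances', lambda: [])
if not resources:
    print('Skip pollster network.outgoing.bytes.rate, '
          'no new resources found this cycle')
```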
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.458 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.458 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.458 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.459 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.459 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.459 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:45:03.458622) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:45:03.460719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.460 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.461 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.461 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.461 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.462 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.462 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.462 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.462 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.463 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.463 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:45:03.462579) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.464 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.464 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.464 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.464 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:45:03.464405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.464 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/cpu volume: 409460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.464 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/cpu volume: 36590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.465 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/cpu volume: 34480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.465 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 45330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.466 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
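The cpu meter above is cumulative guest CPU time in nanoseconds (409460000000 ns is roughly 409.5 s). A common post-processing step, sketched under that assumption, turns two successive readings into a utilisation percentage over the polling interval:

```python
# Convert two cumulative CPU-time readings (nanoseconds) into a
# utilisation percentage over the polling interval; sketch only.
def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
    busy_s = (curr_ns - prev_ns) / 1e9
    return 100.0 * busy_s / (interval_s * vcpus)

# Hypothetical follow-up reading 30 s after the 409460000000 ns sample:
print(cpu_util_percent(409_460_000_000, 412_460_000_000, 30))  # 10.0 %
```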
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.466 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.466 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.467 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.467 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.467 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.467 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:45:03.466736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:45:03.468600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.493 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.493 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.494 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.518 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.518 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.519 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.545 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.546 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.546 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.569 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.570 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.570 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
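Each instance emits three disk.device.capacity samples because the meter is per block device: the two 1073741824-byte readings match the flavor's 1 GiB root and 1 GiB ephemeral disks, and the small third value (583680 bytes, plausibly a config drive, though that is an assumption) is a separate device. An illustrative sketch with assumed device names:

```python
# One sample per block device, as in the capacity readings above.
# Device names are assumptions; the sizes come from the log.
GiB = 1024 ** 3

devices = {'vda': GiB, 'vdb': GiB, 'vdc': 583680}
for dev, capacity in devices.items():
    print(f'44e7d3d0-.../disk.device.capacity ({dev}) volume: {capacity}')
```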
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.571 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.571 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:45:03.571909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.658 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.659 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.659 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.732 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.732 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.733 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.795 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.795 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.795 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.854 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.855 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.855 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
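disk.device.read.bytes is likewise a cumulative per-device counter, so a read throughput follows from two successive polls; the division and the reset guard below are a sketch, not ceilometer's rate-meter implementation:

```python
# Derive a read throughput (bytes/s) from two cumulative readings.
def read_throughput_bps(prev_bytes, curr_bytes, interval_s):
    # Guard against a counter reset (e.g. after an instance reboot).
    return max(curr_bytes - prev_bytes, 0) / interval_s

# Hypothetical: 23325184 -> 23341568 bytes over a 30 s polling interval.
print(read_throughput_bps(23_325_184, 23_341_568, 30))  # ~546 B/s
```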
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.856 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.856 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.856 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.857 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 1593102466 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.857 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 365927498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:45:03.856893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.857 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.latency volume: 408314029 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.857 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 625402940 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.858 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 104257328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.858 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 84305615 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.858 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 567192189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.858 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 97341337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.858 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 75612085 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.859 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.859 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.859 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
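disk.device.read.latency reports cumulative read time per device in nanoseconds; paired with the disk.device.read.requests counts polled just below, a mean per-request latency falls out. A small sketch under that pairing assumption:

```python
# Mean per-request read latency from the cumulative latency (ns) and the
# request count for the same device; sketch only.
def mean_read_latency_ms(total_read_ns, read_requests):
    if read_requests == 0:
        return 0.0
    return total_read_ns / read_requests / 1e6

# First device of 44e7d3d0-...: 1593102466 ns over 844 read requests.
print(round(mean_read_latency_ms(1_593_102_466, 844), 2))  # ~1.89 ms/read
```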
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.860 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.860 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.861 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.861 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.861 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:45:03.860861) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.861 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.862 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.862 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.862 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.862 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.863 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.863 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.863 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.863 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.864 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.865 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.865 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.865 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:45:03.865196) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.866 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.866 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.866 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.866 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.867 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.867 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.867 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.867 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.867 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.868 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.869 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.869 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.869 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.869 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.869 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.869 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.869 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.870 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 41783296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.870 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.870 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:45:03.869289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.871 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.871 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.871 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.871 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.872 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.872 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.872 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.873 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.873 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.873 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 31882638657 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:45:03.873276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.873 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 231382257 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.874 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.874 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 1614620919 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.874 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 10993280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.874 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.874 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 1590671507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.875 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 14157667 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.875 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.876 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.876 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.877 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.877 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.878 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.878 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.878 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.878 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.878 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:45:03.878409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.878 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.879 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.879 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.880 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.880 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.880 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 238 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.880 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.881 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.881 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:45:03.880360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.881 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.881 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.881 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.882 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.882 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.882 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.882 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.883 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.883 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.883 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.884 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.884 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.884 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.884 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.884 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.884 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.884 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes.delta volume: 1438 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.885 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.885 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.885 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:45:03.884383) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.886 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.886 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:45:03.886118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.886 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.886 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.886 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.887 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.887 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.887 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.887 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.887 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.888 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.888 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.888 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.889 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.889 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.889 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.889 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.890 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.890 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.890 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.890 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.891 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:45:03.889501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:45:03.890773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.891 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.891 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.892 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.892 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:45:03.892329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:45:03.893574) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.894 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.894 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.895 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.895 14 DEBUG ceilometer.compute.pollsters [-] 44e7d3d0-d059-412e-a1a9-467d774d2bee/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:45:03.895070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.895 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.895 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.895 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.896 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.897 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:45:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:45:03.898 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
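The DEBUG burst above is ceilometer's polling manager finishing one pollster per configured meter inside a single polling interval. To see which meters the compute agent actually completes, and how often, the lines can be tallied straight from the journal; a minimal sketch, assuming the excerpt was saved to ceilometer.log (hypothetical filename):

# tally_pollsters.py - count "Finished processing pollster [...]" lines per meter.
# The regex matches the exact message format visible in the log above.
import re
from collections import Counter

PATTERN = re.compile(r"Finished processing pollster \[([^\]]+)\]")

counts = Counter()
with open("ceilometer.log") as fh:
    for line in fh:
        m = PATTERN.search(line)
        if m:
            counts[m.group(1)] += 1

for meter, n in sorted(counts.items()):
    print(f"{n:4d}  {meter}")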
Nov 25 10:45:04 compute-0 podman[244752]: 2025-11-25 10:45:04.968296346 +0000 UTC m=+0.078969631 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 25 10:45:05 compute-0 podman[244751]: 2025-11-25 10:45:04.997958069 +0000 UTC m=+0.113034461 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 10:45:05 compute-0 nova_compute[189381]: 2025-11-25 10:45:05.430 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:07 compute-0 nova_compute[189381]: 2025-11-25 10:45:07.788 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:07 compute-0 podman[244790]: 2025-11-25 10:45:07.959247225 +0000 UTC m=+0.071975771 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, name=ubi9, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
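The podman[...] entries above are healthcheck events: each run reports health_status, health_failing_streak and health_log for the container, alongside its full config_data. The same state can be read back on demand with podman inspect; a sketch, using a container name taken from the events above:

# Query the current healthcheck state of a container seen in the log.
# `podman inspect` exposes it under .State.Health (Status, FailingStreak,
# and the recent Log entries).
import json
import subprocess

name = "ceilometer_agent_ipmi"  # container_name from the health_status event
out = subprocess.run(
    ["podman", "inspect", "--format", "{{json .State.Health}}", name],
    capture_output=True, text=True, check=True,
).stdout
health = json.loads(out)
print(health["Status"], "failing streak:", health.get("FailingStreak"))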
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.055 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.055 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.056 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.056 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.144 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.240 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.242 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.302 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.304 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.365 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.366 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.428 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.434 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.438 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.524 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.527 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.591 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.594 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.655 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.656 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.717 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.724 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.785 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.787 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.850 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.851 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.914 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.915 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.987 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:10 compute-0 nova_compute[189381]: 2025-11-25 10:45:10.993 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.051 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.052 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.137 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.139 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.204 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.206 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.269 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
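The audit above probes every instance disk with the same prlimit-wrapped command: oslo_concurrency.prlimit caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s before exec'ing qemu-img info --force-share --output=json, so a wedged qemu-img cannot stall the resource tracker. The call can be reproduced verbatim from the log; a sketch, with the disk path taken from the entries above (adjust for your host):

# Reproduce the disk probe exactly as logged by the audit above.
import json
import subprocess

disk = "/var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk"
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", disk, "--force-share", "--output=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
info = json.loads(out)
# qemu-img's JSON output carries format, virtual-size and actual-size.
print(info["format"], info["virtual-size"], info.get("actual-size"))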
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.636 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.637 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4623MB free_disk=72.1411361694336GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.637 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.638 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.746 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.746 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.746 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 613e6b77-82b6-426c-90b1-38d6776feb1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.747 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.747 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.747 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.876 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.887 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.888 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:45:11 compute-0 nova_compute[189381]: 2025-11-25 10:45:11.888 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
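The inventory dict that placement reports as unchanged is enough to derive the scheduler's headroom: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked sketch with the exact numbers from the log above:

# Effective schedulable capacity from the inventory reported above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc:10s} schedulable = {cap}")
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2 -- consistent with the final
# resource view above (used_vcpus=4 of 8 physical, with 4x CPU overcommit).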
Nov 25 10:45:12 compute-0 nova_compute[189381]: 2025-11-25 10:45:12.790 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:12 compute-0 nova_compute[189381]: 2025-11-25 10:45:12.888 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:14 compute-0 podman[244856]: 2025-11-25 10:45:14.766015526 +0000 UTC m=+0.069374356 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 10:45:15 compute-0 nova_compute[189381]: 2025-11-25 10:45:15.440 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:17 compute-0 nova_compute[189381]: 2025-11-25 10:45:17.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:17 compute-0 nova_compute[189381]: 2025-11-25 10:45:17.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:45:17 compute-0 nova_compute[189381]: 2025-11-25 10:45:17.792 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:17 compute-0 podman[244875]: 2025-11-25 10:45:17.966729914 +0000 UTC m=+0.075475661 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, name=ubi9-minimal, config_id=edpm, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:45:17 compute-0 podman[244876]: 2025-11-25 10:45:17.973938581 +0000 UTC m=+0.076732757 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
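Between them, the exporters on this node publish several Prometheus endpoints on the host ports listed in the config_data above: node_exporter on 9100, openstack_network_exporter on 9105, podman_exporter on 9882 and kepler on 8888. Each also mounts TLS material via a web.config.file, so the listeners may refuse plain HTTP; a probe sketch under that caveat:

# Probe the exporter ports published in the container configs above.
# Assumes plain HTTP; TLS-only listeners will fall into the except branch
# and need HTTPS with the telemetry CA bundle instead.
import urllib.request

PORTS = {"node_exporter": 9100, "openstack_network_exporter": 9105,
         "podman_exporter": 9882, "kepler": 8888}

for name, port in PORTS.items():
    try:
        url = f"http://127.0.0.1:{port}/metrics"
        with urllib.request.urlopen(url, timeout=2) as resp:
            print(name, resp.status, "bytes:", len(resp.read()))
    except Exception as exc:
        print(name, "unreachable over plain HTTP:", exc)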
Nov 25 10:45:18 compute-0 nova_compute[189381]: 2025-11-25 10:45:18.159 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:45:18 compute-0 nova_compute[189381]: 2025-11-25 10:45:18.160 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:45:18 compute-0 nova_compute[189381]: 2025-11-25 10:45:18.160 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:45:19 compute-0 nova_compute[189381]: 2025-11-25 10:45:19.869 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updating instance_info_cache with network_info: [{"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:45:19 compute-0 nova_compute[189381]: 2025-11-25 10:45:19.881 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:45:19 compute-0 nova_compute[189381]: 2025-11-25 10:45:19.882 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
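_heal_instance_info_cache refreshes one instance's network info per periodic run; the network_info payload it logs is plain JSON and can be lifted straight out of the journal line. A sketch, assuming `line` holds the raw entry (the marker string matches the message format above):

# Extract the network_info JSON logged by the heal task above and print
# the fixed/floating addresses per VIF.
import json

MARKER = "Updating instance_info_cache with network_info: "

def addresses(line: str) -> None:
    payload = line.split(MARKER, 1)[1]
    # Trim the trailing "update_instance_cache_with_nw_info ..." suffix:
    # the last ']' closes the JSON array.
    payload = payload[: payload.rindex("]") + 1]
    for vif in json.loads(payload):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip["floating_ips"]]
                print("fixed:", ip["address"], "floating:", floats)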
Nov 25 10:45:19 compute-0 nova_compute[189381]: 2025-11-25 10:45:19.883 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:19 compute-0 nova_compute[189381]: 2025-11-25 10:45:19.883 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:19 compute-0 nova_compute[189381]: 2025-11-25 10:45:19.883 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:19 compute-0 nova_compute[189381]: 2025-11-25 10:45:19.883 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:19 compute-0 nova_compute[189381]: 2025-11-25 10:45:19.884 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:45:20 compute-0 nova_compute[189381]: 2025-11-25 10:45:20.445 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:20 compute-0 nova_compute[189381]: 2025-11-25 10:45:20.877 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:45:21 compute-0 podman[244920]: 2025-11-25 10:45:21.000194023 +0000 UTC m=+0.114958696 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:45:22 compute-0 nova_compute[189381]: 2025-11-25 10:45:22.793 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:23 compute-0 podman[244945]: 2025-11-25 10:45:23.989134771 +0000 UTC m=+0.093508849 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 25 10:45:25 compute-0 nova_compute[189381]: 2025-11-25 10:45:25.450 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:27 compute-0 nova_compute[189381]: 2025-11-25 10:45:27.795 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:29 compute-0 podman[203557]: time="2025-11-25T10:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:45:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:45:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
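These two GET lines are the podman system service answering libpod REST calls over its unix socket, which is how podman_exporter collects container stats (the socket path, /run/podman/podman.sock, appears in its config below). The same endpoint can be queried with only the standard library; a sketch:

# Call the libpod REST endpoint from the access log above over the unix
# socket. http.client has no native unix-socket support, so we override
# connect() to dial the socket directly.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])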
Nov 25 10:45:29 compute-0 podman[244964]: 2025-11-25 10:45:29.974451213 +0000 UTC m=+0.081397931 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:45:30 compute-0 nova_compute[189381]: 2025-11-25 10:45:30.456 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:31 compute-0 openstack_network_exporter[205722]: ERROR   10:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:45:31 compute-0 openstack_network_exporter[205722]: ERROR   10:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:45:31 compute-0 openstack_network_exporter[205722]: ERROR   10:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:45:31 compute-0 openstack_network_exporter[205722]: ERROR   10:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:45:31 compute-0 openstack_network_exporter[205722]: ERROR   10:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:45:32 compute-0 nova_compute[189381]: 2025-11-25 10:45:32.797 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:35 compute-0 nova_compute[189381]: 2025-11-25 10:45:35.460 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:35 compute-0 podman[244986]: 2025-11-25 10:45:35.978507033 +0000 UTC m=+0.092637814 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:45:35 compute-0 podman[244985]: 2025-11-25 10:45:35.981461968 +0000 UTC m=+0.095706303 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:45:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:45:36.047 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:45:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:45:36.048 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:45:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:45:36.048 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:45:37 compute-0 nova_compute[189381]: 2025-11-25 10:45:37.800 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:38 compute-0 podman[245020]: 2025-11-25 10:45:38.976683549 +0000 UTC m=+0.083034068 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 10:45:40 compute-0 nova_compute[189381]: 2025-11-25 10:45:40.463 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:42 compute-0 nova_compute[189381]: 2025-11-25 10:45:42.803 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:44 compute-0 podman[245040]: 2025-11-25 10:45:44.961034487 +0000 UTC m=+0.067639976 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:45:45 compute-0 nova_compute[189381]: 2025-11-25 10:45:45.468 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:47 compute-0 nova_compute[189381]: 2025-11-25 10:45:47.815 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:48 compute-0 podman[245060]: 2025-11-25 10:45:48.984791927 +0000 UTC m=+0.074606406 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:45:49 compute-0 podman[245059]: 2025-11-25 10:45:49.003257648 +0000 UTC m=+0.101927751 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6)
Nov 25 10:45:50 compute-0 nova_compute[189381]: 2025-11-25 10:45:50.473 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:51 compute-0 podman[245101]: 2025-11-25 10:45:51.992728543 +0000 UTC m=+0.100238723 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 10:45:52 compute-0 nova_compute[189381]: 2025-11-25 10:45:52.817 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:54 compute-0 podman[245125]: 2025-11-25 10:45:54.960874605 +0000 UTC m=+0.078283382 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:45:55 compute-0 nova_compute[189381]: 2025-11-25 10:45:55.478 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:57 compute-0 nova_compute[189381]: 2025-11-25 10:45:57.819 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:45:59 compute-0 podman[203557]: time="2025-11-25T10:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:45:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:45:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Nov 25 10:46:00 compute-0 nova_compute[189381]: 2025-11-25 10:46:00.482 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:00 compute-0 podman[245145]: 2025-11-25 10:46:00.978730441 +0000 UTC m=+0.094729884 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:46:01 compute-0 openstack_network_exporter[205722]: ERROR   10:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:46:01 compute-0 openstack_network_exporter[205722]: ERROR   10:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:46:01 compute-0 openstack_network_exporter[205722]: ERROR   10:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:46:01 compute-0 openstack_network_exporter[205722]: ERROR   10:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:46:01 compute-0 openstack_network_exporter[205722]: ERROR   10:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:46:02 compute-0 nova_compute[189381]: 2025-11-25 10:46:02.821 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:05 compute-0 nova_compute[189381]: 2025-11-25 10:46:05.485 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:06 compute-0 podman[245171]: 2025-11-25 10:46:06.971288241 +0000 UTC m=+0.075232504 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:46:06 compute-0 podman[245170]: 2025-11-25 10:46:06.98307869 +0000 UTC m=+0.096952318 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:46:07 compute-0 nova_compute[189381]: 2025-11-25 10:46:07.824 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:09 compute-0 podman[245205]: 2025-11-25 10:46:09.996450881 +0000 UTC m=+0.104609918 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, version=9.4, architecture=x86_64, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 25 10:46:10 compute-0 nova_compute[189381]: 2025-11-25 10:46:10.489 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.047 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.048 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.143 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.241 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.243 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.301 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.302 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.360 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.361 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.424 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.431 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.496 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.497 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.553 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.554 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.615 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.616 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.678 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.685 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.742 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.744 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.802 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.803 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.861 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.862 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.920 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.926 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.983 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:11 compute-0 nova_compute[189381]: 2025-11-25 10:46:11.984 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.043 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.045 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.106 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.107 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.180 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.540 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.541 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4625MB free_disk=72.14117431640625GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.542 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.542 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.827 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.827 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 44e7d3d0-d059-412e-a1a9-467d774d2bee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.827 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 613e6b77-82b6-426c-90b1-38d6776feb1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.829 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.829 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.829 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.832 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.918 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.929 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.931 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:46:12 compute-0 nova_compute[189381]: 2025-11-25 10:46:12.931 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:46:14 compute-0 nova_compute[189381]: 2025-11-25 10:46:14.931 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:15 compute-0 nova_compute[189381]: 2025-11-25 10:46:15.493 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:15 compute-0 podman[245274]: 2025-11-25 10:46:15.95166794 +0000 UTC m=+0.068331035 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.816 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.817 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.817 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.818 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.818 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
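The Acquiring/acquired/released triplets above come from oslo.concurrency's lock wrapper; the whole termination runs under a lock named after the instance UUID so concurrent API calls cannot race the delete. A minimal sketch of that pattern, assuming oslo.concurrency's lockutils (the function name is illustrative):

    # Per-instance serialization sketch using oslo.concurrency; the wrapper
    # is what logs the Acquiring/acquired/released lines seen above.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = '44e7d3d0-d059-412e-a1a9-467d774d2bee'

    @lockutils.synchronized(INSTANCE_UUID)
    def do_terminate_instance():
        # everything here runs with the instance-scoped lock held
        print('terminating %s' % INSTANCE_UUID)

    do_terminate_instance()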
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.820 189385 INFO nova.compute.manager [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Terminating instance
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.821 189385 DEBUG nova.compute.manager [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.828 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:17 compute-0 kernel: tapc7376e3d-20 (unregistering): left promiscuous mode
Nov 25 10:46:17 compute-0 NetworkManager[56317]: <info>  [1764067577.8716] device (tapc7376e3d-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 10:46:17 compute-0 ovn_controller[97779]: 2025-11-25T10:46:17Z|00050|binding|INFO|Releasing lport c7376e3d-2069-45b2-a63a-2eefc475ad2b from this chassis (sb_readonly=0)
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.882 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:17 compute-0 ovn_controller[97779]: 2025-11-25T10:46:17Z|00051|binding|INFO|Setting lport c7376e3d-2069-45b2-a63a-2eefc475ad2b down in Southbound
Nov 25 10:46:17 compute-0 ovn_controller[97779]: 2025-11-25T10:46:17Z|00052|binding|INFO|Removing iface tapc7376e3d-20 ovn-installed in OVS
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.885 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:17 compute-0 nova_compute[189381]: 2025-11-25 10:46:17.897 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:17 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:17.901 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:45:ac 192.168.0.71'], port_security=['fa:16:3e:ab:45:ac 192.168.0.71'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-6oeui4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-port-clymc3k5eg3x', 'neutron:cidrs': '192.168.0.71/24', 'neutron:device_id': '44e7d3d0-d059-412e-a1a9-467d774d2bee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35870011-2c24-4719-a9ee-4942cd8ed50e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-6oeui4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-port-clymc3k5eg3x', 'neutron:project_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48d58879-e124-47b1-85de-2b7aab5c0e02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53f1de54-d9db-4691-881b-b04f921a948f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=c7376e3d-2069-45b2-a63a-2eefc475ad2b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:46:17 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:17.902 106634 INFO neutron.agent.ovn.metadata.agent [-] Port c7376e3d-2069-45b2-a63a-2eefc475ad2b in datapath 35870011-2c24-4719-a9ee-4942cd8ed50e unbound from our chassis
Nov 25 10:46:17 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:17.903 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35870011-2c24-4719-a9ee-4942cd8ed50e
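PortBindingUpdatedEvent is an ovsdbapp row event: the agent watches the Southbound Port_Binding table and reacts when a row's chassis/up columns change, which is what triggers the unbind and re-provision above. A sketch of such an event class, modeled on the match the log itself prints (registering it with the IDL connection's notify handler is omitted):

    # Row-event sketch modeled on the matcher output above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions) exactly as the log printed them
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' holds the previous values of the changed columns;
            # here chassis/up flipped, so the port left this chassis.
            print('port %s updated' % row.logical_port)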
Nov 25 10:46:17 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:17.918 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[ae810120-639a-40c3-909d-7725ae3ab3d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:46:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 25 10:46:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 59.590s CPU time.
Nov 25 10:46:17 compute-0 systemd-machined[155706]: Machine qemu-2-instance-00000002 terminated.
Nov 25 10:46:17 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:17.945 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d59bfa-c51d-44d2-8088-17e3ee498f89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:46:17 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:17.948 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[90d22788-1e6b-4506-ba61-7d2299d367c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:46:17 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:17.971 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[1686e529-900d-4c79-a91a-09b2e6ce26a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:46:17 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:17.987 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[588c15d6-8991-40ac-9633-2d5b6ce54ec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35870011-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:64:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369752, 'reachable_time': 27936, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245305, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:46:18 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:18.003 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1985d5-6404-43e3-96dd-d03880a35892]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369763, 'tstamp': 369763}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245306, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369766, 'tstamp': 369766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245306, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
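The two privsep replies above are netlink dumps (RTM_NEWLINK, RTM_NEWADDR) taken inside the ovnmeta- namespace; the 'target' field names the namespace, and the second reply shows the metadata IP 169.254.169.254 bound to the tap device. Roughly the same query with pyroute2, assuming the namespace name from the log (needs root):

    # Dump link and address state inside the ovnmeta- namespace with
    # pyroute2; namespace name taken from the 'target' field above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])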
Nov 25 10:46:18 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:18.005 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35870011-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.007 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.014 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:18 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:18.015 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35870011-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:18 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:18.015 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:46:18 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:18.016 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35870011-20, col_values=(('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:46:18 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:18.016 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
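Taken together, the three transactions above re-wire the metadata tap: drop it from br-ex, add it to br-int, and stamp external_ids:iface-id so ovn-controller can bind it; the last two report "no change" because the port was already in the desired state. A sketch of the same commands through ovsdbapp's Open vSwitch API (the socket path and connection setup are assumptions; the command names match the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed path
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap35870011-20', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap35870011-20', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap35870011-20',
            ('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'})))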
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.044 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.053 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.100 189385 INFO nova.virt.libvirt.driver [-] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Instance destroyed successfully.
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.102 189385 DEBUG nova.objects.instance [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'resources' on Instance uuid 44e7d3d0-d059-412e-a1a9-467d774d2bee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.123 189385 DEBUG nova.virt.libvirt.vif [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T10:34:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-3t4zfpjeb7ff-ekuqttmklqsb-vnf-qma753sfy6ng',id=2,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T10:34:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-ske9c4nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T10:34:25Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzgzNDk1NTY2OTA3MDIwMDg5Nz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 25 10:46:18 compute-0 nova_compute[189381]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzgzNDk1NTY2OTA3MDIwMDg5Nz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc4MzQ5NTU2NjkwNzAyMDA4OTc9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03ODM0OTU1NjY5MDcwMjAwODk3PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=44e7d3d0-d059-412e-a1a9-467d774d2bee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.123 189385 DEBUG nova.network.os_vif_util [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "address": "fa:16:3e:ab:45:ac", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.71", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc7376e3d-20", "ovs_interfaceid": "c7376e3d-2069-45b2-a63a-2eefc475ad2b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.124 189385 DEBUG nova.network.os_vif_util [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:45:ac,bridge_name='br-int',has_traffic_filtering=True,id=c7376e3d-2069-45b2-a63a-2eefc475ad2b,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7376e3d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.125 189385 DEBUG os_vif [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:45:ac,bridge_name='br-int',has_traffic_filtering=True,id=c7376e3d-2069-45b2-a63a-2eefc475ad2b,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7376e3d-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.126 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.127 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc7376e3d-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.129 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.131 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.132 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.134 189385 INFO os_vif [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:45:ac,bridge_name='br-int',has_traffic_filtering=True,id=c7376e3d-2069-45b2-a63a-2eefc475ad2b,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc7376e3d-20')
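The Converting/Converted/Unplugging/unplugged sequence is the os-vif handoff: Nova translates its VIF dict into an os-vif VIFOpenVSwitch object and asks the 'ovs' plugin to remove the port. A rough sketch of that call, with field values copied from the log; the objects are illustrative and the real ones carry more fields:

    import os_vif
    from os_vif import objects

    os_vif.initialize()  # loads the 'ovs' plugin, among others

    network = objects.network.Network(
        id='35870011-2c24-4719-a9ee-4942cd8ed50e', bridge='br-int')
    vif = objects.vif.VIFOpenVSwitch(
        id='c7376e3d-2069-45b2-a63a-2eefc475ad2b',
        address='fa:16:3e:ab:45:ac',
        vif_name='tapc7376e3d-20',
        network=network)
    instance = objects.instance_info.InstanceInfo(
        uuid='44e7d3d0-d059-412e-a1a9-467d774d2bee',
        name='instance-00000002')

    os_vif.unplug(vif, instance)  # ends up deleting the port from br-int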
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.135 189385 INFO nova.virt.libvirt.driver [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Deleting instance files /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee_del
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.135 189385 INFO nova.virt.libvirt.driver [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Deletion of /var/lib/nova/instances/44e7d3d0-d059-412e-a1a9-467d774d2bee_del complete
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.263 189385 DEBUG nova.virt.libvirt.host [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.264 189385 INFO nova.virt.libvirt.host [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] UEFI support detected
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.266 189385 INFO nova.compute.manager [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Took 0.44 seconds to destroy the instance on the hypervisor.
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.266 189385 DEBUG oslo.service.loopingcall [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.266 189385 DEBUG nova.compute.manager [-] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.267 189385 DEBUG nova.network.neutron [-] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
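_deallocate_network_with_retries runs under an oslo.service looping call, so transient Neutron failures are retried instead of leaking the ports. A minimal sketch of that retry idiom (names and the fail-twice behavior are illustrative):

    # Raising LoopingCallDone ends the loop; a normal return schedules
    # another attempt after the interval.
    from oslo_service import loopingcall

    state = {'attempts': 0}

    def deallocate_with_retries():
        state['attempts'] += 1
        if state['attempts'] >= 3:           # pretend attempt 3 succeeds
            raise loopingcall.LoopingCallDone(retvalue=state['attempts'])

    timer = loopingcall.FixedIntervalLoopingCall(deallocate_with_retries)
    print(timer.start(interval=1).wait())    # blocks, then prints 3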
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.322 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.322 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.323 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:46:18 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:46:18.123 189385 DEBUG nova.virt.libvirt.vif [None req-acb45729-73 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
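This rsyslogd complaint explains why the oversized Nova VIF debug line above (the one carrying the base64 user_data) arrives split and truncated: any record beyond the configured 8096 bytes is cut off. If the full line matters for debugging, one way to lift the limit is rsyslog's global maxMessageSize directive, set before any input modules load (assuming rsyslog v8 RainerScript syntax; the value is illustrative):

    # /etc/rsyslog.conf -- must appear near the top, before inputs are bound
    global(maxMessageSize="65536")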
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.445 189385 DEBUG nova.compute.manager [req-c4497fc7-3b18-467f-bfca-d08dc6cf0233 req-54792941-c49a-4aae-bc02-a58d01115614 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received event network-vif-unplugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.446 189385 DEBUG oslo_concurrency.lockutils [req-c4497fc7-3b18-467f-bfca-d08dc6cf0233 req-54792941-c49a-4aae-bc02-a58d01115614 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.447 189385 DEBUG oslo_concurrency.lockutils [req-c4497fc7-3b18-467f-bfca-d08dc6cf0233 req-54792941-c49a-4aae-bc02-a58d01115614 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.447 189385 DEBUG oslo_concurrency.lockutils [req-c4497fc7-3b18-467f-bfca-d08dc6cf0233 req-54792941-c49a-4aae-bc02-a58d01115614 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.448 189385 DEBUG nova.compute.manager [req-c4497fc7-3b18-467f-bfca-d08dc6cf0233 req-54792941-c49a-4aae-bc02-a58d01115614 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] No waiting events found dispatching network-vif-unplugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.449 189385 DEBUG nova.compute.manager [req-c4497fc7-3b18-467f-bfca-d08dc6cf0233 req-54792941-c49a-4aae-bc02-a58d01115614 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received event network-vif-unplugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 10:46:18 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:18.550 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:46:18 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:18.551 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:46:18 compute-0 nova_compute[189381]: 2025-11-25 10:46:18.554 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.551 189385 DEBUG nova.network.neutron [-] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:46:19 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:19.553 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.567 189385 INFO nova.compute.manager [-] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Took 1.30 seconds to deallocate network for instance.
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.618 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.619 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.741 189385 DEBUG nova.compute.provider_tree [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.763 189385 DEBUG nova.scheduler.client.report [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.790 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.826 189385 INFO nova.scheduler.client.report [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Deleted allocations for instance 44e7d3d0-d059-412e-a1a9-467d774d2bee
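"Deleted allocations" is the report client clearing the instance's consumer record in Placement, which frees the 1 VCPU / 512 MB / 2 GB accounted at the top of this section. The equivalent REST call is DELETE /allocations/{consumer_uuid}; a sketch with keystoneauth1 (all credentials and endpoints are placeholders):

    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    placement = adapter.Adapter(session.Session(auth=auth),
                                service_type='placement')

    uuid = '44e7d3d0-d059-412e-a1a9-467d774d2bee'
    resp = placement.delete('/allocations/%s' % uuid)
    print(resp.status_code)  # 204 once the allocations are gone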
Nov 25 10:46:19 compute-0 nova_compute[189381]: 2025-11-25 10:46:19.907 189385 DEBUG oslo_concurrency.lockutils [None req-acb45729-7320-40a2-870c-7fa275938065 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:46:19 compute-0 podman[245330]: 2025-11-25 10:46:19.967816832 +0000 UTC m=+0.078332153 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 10:46:19 compute-0 podman[245329]: 2025-11-25 10:46:19.978346085 +0000 UTC m=+0.091390088 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.293 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updating instance_info_cache with network_info: [{"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.320 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.320 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.320 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.321 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.321 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.321 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
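[The "Running periodic task ComputeManager._*" entries come from oslo.service's periodic-task machinery: each decorated method is invoked by run_periodic_tasks, and _reclaim_queued_deletes bails out when the interval option is unset, producing the "skipping" line above. A minimal, self-contained sketch of that pattern; the class and option default mirror the log, not nova's actual code:]

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])
    CONF([])  # parse nothing; use defaults

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # nova logs "CONF.reclaim_instance_interval <= 0, skipping..."
            # soft-deleted instances older than the interval would be reclaimed here

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(context=None)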
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.604 189385 DEBUG nova.compute.manager [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received event network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.604 189385 DEBUG oslo_concurrency.lockutils [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.604 189385 DEBUG oslo_concurrency.lockutils [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.604 189385 DEBUG oslo_concurrency.lockutils [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "44e7d3d0-d059-412e-a1a9-467d774d2bee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
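[The Acquiring/acquired/released triplet above is oslo.concurrency's standard lock instrumentation; the "inner" frames at lockutils.py:404/409/423 are the wrapper generated by the synchronized decorator, which logs wait and hold times at DEBUG. A minimal sketch with the same lock name:]

    from oslo_concurrency import lockutils

    # the decorator's wrapper logs acquiring/acquired/released, as seen above
    @lockutils.synchronized("44e7d3d0-d059-412e-a1a9-467d774d2bee-events")
    def _pop_event():
        pass  # nova pops the waiting instance event here

    _pop_event()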
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.605 189385 DEBUG nova.compute.manager [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] No waiting events found dispatching network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.606 189385 WARNING nova.compute.manager [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received unexpected event network-vif-plugged-c7376e3d-2069-45b2-a63a-2eefc475ad2b for instance with vm_state deleted and task_state None.
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.607 189385 DEBUG nova.compute.manager [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Received event network-changed-c7376e3d-2069-45b2-a63a-2eefc475ad2b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.607 189385 DEBUG nova.compute.manager [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Refreshing instance network info cache due to event network-changed-c7376e3d-2069-45b2-a63a-2eefc475ad2b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.607 189385 DEBUG oslo_concurrency.lockutils [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.608 189385 DEBUG oslo_concurrency.lockutils [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.608 189385 DEBUG nova.network.neutron [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Refreshing network info cache for port c7376e3d-2069-45b2-a63a-2eefc475ad2b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:46:20 compute-0 nova_compute[189381]: 2025-11-25 10:46:20.859 189385 DEBUG nova.network.neutron [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 10:46:21 compute-0 nova_compute[189381]: 2025-11-25 10:46:21.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:21 compute-0 nova_compute[189381]: 2025-11-25 10:46:21.454 189385 DEBUG nova.network.neutron [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 25 10:46:21 compute-0 nova_compute[189381]: 2025-11-25 10:46:21.455 189385 DEBUG oslo_concurrency.lockutils [req-8910ec9e-134a-4b39-accd-0ff165195e2f req-dc22bd94-d39b-4994-b44c-dbc9267eef91 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-44e7d3d0-d059-412e-a1a9-467d774d2bee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:46:22 compute-0 nova_compute[189381]: 2025-11-25 10:46:22.830 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:22 compute-0 podman[245372]: 2025-11-25 10:46:22.977205131 +0000 UTC m=+0.096292419 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 10:46:23 compute-0 nova_compute[189381]: 2025-11-25 10:46:23.130 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:25 compute-0 podman[245399]: 2025-11-25 10:46:25.9946258 +0000 UTC m=+0.108418738 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 10:46:27 compute-0 nova_compute[189381]: 2025-11-25 10:46:27.832 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:28 compute-0 nova_compute[189381]: 2025-11-25 10:46:28.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:46:28 compute-0 nova_compute[189381]: 2025-11-25 10:46:28.132 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
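[The recurring "[POLLIN] on fd 26" lines are ovsdbapp's OVSDB IDL blocking in the ovs poller and logging each wakeup on the database socket; the path in the message points at the same ovs.poller module used below. A sketch of that wait pattern, with a pipe standing in for the connected OVSDB socket (fd 26 in this log):]

    import os
    import select
    from ovs import poller

    r, w = os.pipe()            # stand-in for the OVSDB connection
    os.write(w, b"x")           # pretend the server sent data

    p = poller.Poller()
    p.fd_wait(r, select.POLLIN)
    p.timer_wait(5000)          # also wake after 5s, even if idle
    p.block()                   # returns once r is readable; vlog reports "[POLLIN] on fd ..."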
Nov 25 10:46:29 compute-0 podman[203557]: time="2025-11-25T10:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:46:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:46:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
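[The podman[203557] lines are the podman system service answering libpod REST calls over its unix socket; per the podman_exporter config later in this log, that socket is /run/podman/podman.sock. A sketch of issuing the first GET by hand, assuming that socket path and root access:]

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a unix domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")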
Nov 25 10:46:31 compute-0 openstack_network_exporter[205722]: ERROR   10:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:46:31 compute-0 openstack_network_exporter[205722]: ERROR   10:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:46:31 compute-0 openstack_network_exporter[205722]: ERROR   10:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:46:31 compute-0 openstack_network_exporter[205722]: ERROR   10:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:46:31 compute-0 openstack_network_exporter[205722]: ERROR   10:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
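[These exporter errors are mechanical: appctl-style helpers resolve a daemon's control socket as <rundir>/<name>.<pid>.ctl, and when no matching socket exists where the exporter looks (ovn-northd is a control-plane daemon that does not run on compute nodes), the call fails with exactly these messages. A sketch of the same discovery, assuming the conventional rundirs:]

    import glob

    # both globs can come up empty on a compute-only node, which is
    # what "no control socket files found" is complaining about
    for name, rundir in [("ovn-northd", "/var/run/ovn"),
                         ("ovsdb-server", "/var/run/openvswitch")]:
        socks = glob.glob(f"{rundir}/{name}.*.ctl")
        print(name, "->", socks or "no control socket files found")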
Nov 25 10:46:31 compute-0 podman[245420]: 2025-11-25 10:46:31.939196659 +0000 UTC m=+0.053114437 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:46:32 compute-0 nova_compute[189381]: 2025-11-25 10:46:32.833 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:33 compute-0 nova_compute[189381]: 2025-11-25 10:46:33.097 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764067578.0960271, 44e7d3d0-d059-412e-a1a9-467d774d2bee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:46:33 compute-0 nova_compute[189381]: 2025-11-25 10:46:33.097 189385 INFO nova.compute.manager [-] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] VM Stopped (Lifecycle Event)
Nov 25 10:46:33 compute-0 nova_compute[189381]: 2025-11-25 10:46:33.115 189385 DEBUG nova.compute.manager [None req-a093d6e3-7a4f-4425-8aa8-4f9c62708a35 - - - - - -] [instance: 44e7d3d0-d059-412e-a1a9-467d774d2bee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:46:33 compute-0 nova_compute[189381]: 2025-11-25 10:46:33.133 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:36.048 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:46:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:36.049 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:46:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:46:36.049 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:46:37 compute-0 nova_compute[189381]: 2025-11-25 10:46:37.836 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:37 compute-0 podman[245443]: 2025-11-25 10:46:37.986177074 +0000 UTC m=+0.099745558 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:46:37 compute-0 podman[245442]: 2025-11-25 10:46:37.998046796 +0000 UTC m=+0.112838006 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:46:38 compute-0 nova_compute[189381]: 2025-11-25 10:46:38.135 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:40 compute-0 podman[245481]: 2025-11-25 10:46:40.986830739 +0000 UTC m=+0.094070366 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Nov 25 10:46:42 compute-0 nova_compute[189381]: 2025-11-25 10:46:42.838 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:43 compute-0 nova_compute[189381]: 2025-11-25 10:46:43.137 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:45 compute-0 sshd-session[245480]: Connection reset by 205.210.31.220 port 63418 [preauth]
Nov 25 10:46:46 compute-0 podman[245502]: 2025-11-25 10:46:46.950293223 +0000 UTC m=+0.065041681 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 10:46:47 compute-0 nova_compute[189381]: 2025-11-25 10:46:47.841 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:48 compute-0 nova_compute[189381]: 2025-11-25 10:46:48.140 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:50 compute-0 podman[245523]: 2025-11-25 10:46:50.961289018 +0000 UTC m=+0.066360239 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
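[In the node_exporter config above, --collector.systemd.unit-include is an anchored regular expression: only systemd units whose full name matches are exported. A quick check of which unit names pass that filter, using the regex from the log (the sample unit names are illustrative, not taken from this host):]

    import re

    # node_exporter anchors the pattern, so fullmatch is the right analogue
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))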
Nov 25 10:46:50 compute-0 podman[245522]: 2025-11-25 10:46:50.96136492 +0000 UTC m=+0.070295492 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible)
Nov 25 10:46:51 compute-0 ovn_controller[97779]: 2025-11-25T10:46:51Z|00053|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Nov 25 10:46:52 compute-0 nova_compute[189381]: 2025-11-25 10:46:52.845 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:53 compute-0 nova_compute[189381]: 2025-11-25 10:46:53.142 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:53 compute-0 podman[245564]: 2025-11-25 10:46:53.981879491 +0000 UTC m=+0.092540761 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 10:46:56 compute-0 podman[245590]: 2025-11-25 10:46:56.955013218 +0000 UTC m=+0.068753667 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 10:46:57 compute-0 nova_compute[189381]: 2025-11-25 10:46:57.847 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:58 compute-0 nova_compute[189381]: 2025-11-25 10:46:58.143 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:46:59 compute-0 podman[203557]: time="2025-11-25T10:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:46:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:46:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Nov 25 10:47:01 compute-0 openstack_network_exporter[205722]: ERROR   10:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:47:01 compute-0 openstack_network_exporter[205722]: ERROR   10:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:47:01 compute-0 openstack_network_exporter[205722]: ERROR   10:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:47:01 compute-0 openstack_network_exporter[205722]: ERROR   10:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:47:01 compute-0 openstack_network_exporter[205722]: ERROR   10:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:47:01 compute-0 anacron[30897]: Job `cron.weekly' started
Nov 25 10:47:01 compute-0 anacron[30897]: Job `cron.weekly' terminated
Nov 25 10:47:02 compute-0 nova_compute[189381]: 2025-11-25 10:47:02.848 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:02 compute-0 podman[245611]: 2025-11-25 10:47:02.949456536 +0000 UTC m=+0.067582354 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:47:03 compute-0 nova_compute[189381]: 2025-11-25 10:47:03.145 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.333 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.333 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081ad550>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.340 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '613e6b77-82b6-426c-90b1-38d6776feb1f', 'name': 'vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.343 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83ab44b9-7ddb-4994-9415-20b7dd9c081c', 'name': 'vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.346 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.346 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:47:03.347705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.352 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.356 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.360 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.361 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.361 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.361 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.362 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:47:03.361693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.362 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.363 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:47:03.363537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.385 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/memory.usage volume: 48.984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.405 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.432 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.433 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.433 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.433 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.433 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.433 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.434 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:47:03.433742) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.434 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.435 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:47:03.435480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.435 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.436 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.436 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.436 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.437 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.437 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.437 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:47:03.436990) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.437 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/cpu volume: 37810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:47:03.438483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.438 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/cpu volume: 35690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.439 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 46590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.439 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.439 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.440 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.440 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:47:03.439920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.440 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.441 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:47:03.441674) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.462 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.462 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.463 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.481 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.482 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.484 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.504 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.505 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.505 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.506 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.506 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:47:03.506236) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.564 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.565 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.565 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.641 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.641 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.641 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.712 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.713 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.713 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.714 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.714 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.715 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 625402940 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.715 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 104257328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.715 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.latency volume: 84305615 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.715 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 567192189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.715 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 97341337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.716 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 75612085 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.716 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.716 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.716 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:47:03.714891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.720 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.720 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:47:03.720179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.721 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.721 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.721 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.721 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.722 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.722 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.722 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.722 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.723 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.723 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.723 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.724 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.724 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.724 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.724 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.724 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.724 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.725 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.725 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.725 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 41783296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.726 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.726 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.726 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.726 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.726 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.727 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.727 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.727 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:47:03.723502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:47:03.725845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.729 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 1614620919 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.729 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 10993280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:47:03.729304) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.730 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.730 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 1590671507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.730 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 14157667 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.730 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.730 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.730 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.731 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.731 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.731 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.732 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.732 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:47:03.731856) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:47:03.733406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.733 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.734 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.734 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.734 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.734 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.734 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.735 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.735 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.736 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.736 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.736 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:47:03.735978) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.736 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.737 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.738 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.738 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:47:03.737657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.738 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.738 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.739 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.739 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.739 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.739 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.740 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.740 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.741 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.741 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.741 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.742 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.742 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.742 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:47:03.740876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:47:03.741962) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.743 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.743 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.744 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.744 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.745 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.745 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.745 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:47:03.743872) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:47:03.744923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.746 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.746 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.746 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.746 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.746 14 DEBUG ceilometer.compute.pollsters [-] 613e6b77-82b6-426c-90b1-38d6776feb1f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.746 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.747 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:47:03.746523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.747 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.748 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.749 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:47:03.750 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:47:07 compute-0 nova_compute[189381]: 2025-11-25 10:47:07.851 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:08 compute-0 nova_compute[189381]: 2025-11-25 10:47:08.147 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:08 compute-0 podman[245633]: 2025-11-25 10:47:08.97242729 +0000 UTC m=+0.089680730 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:47:08 compute-0 podman[245634]: 2025-11-25 10:47:08.989529221 +0000 UTC m=+0.103796945 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:47:11 compute-0 nova_compute[189381]: 2025-11-25 10:47:11.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:11 compute-0 podman[245669]: 2025-11-25 10:47:11.98785619 +0000 UTC m=+0.099818731 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.048 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.165 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.226 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.226 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.287 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.290 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.359 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.361 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.421 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.429 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.489 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.490 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.553 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.555 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.630 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.631 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.689 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.696 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.759 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.760 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.841 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.843 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.858 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.907 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.908 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:47:12 compute-0 nova_compute[189381]: 2025-11-25 10:47:12.975 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.149 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.342 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.344 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4795MB free_disk=72.16117477416992GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.344 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.344 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.465 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.466 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 613e6b77-82b6-426c-90b1-38d6776feb1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.467 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.468 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.468 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.578 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.595 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.614 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:47:13 compute-0 nova_compute[189381]: 2025-11-25 10:47:13.615 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:47:15 compute-0 nova_compute[189381]: 2025-11-25 10:47:15.616 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:17 compute-0 nova_compute[189381]: 2025-11-25 10:47:17.856 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:17 compute-0 podman[245726]: 2025-11-25 10:47:17.955022679 +0000 UTC m=+0.068842940 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:47:18 compute-0 nova_compute[189381]: 2025-11-25 10:47:18.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:18 compute-0 nova_compute[189381]: 2025-11-25 10:47:18.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:47:18 compute-0 nova_compute[189381]: 2025-11-25 10:47:18.152 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:18 compute-0 nova_compute[189381]: 2025-11-25 10:47:18.256 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:47:18 compute-0 nova_compute[189381]: 2025-11-25 10:47:18.256 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:47:18 compute-0 nova_compute[189381]: 2025-11-25 10:47:18.257 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:47:20 compute-0 nova_compute[189381]: 2025-11-25 10:47:20.280 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updating instance_info_cache with network_info: [{"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:47:20 compute-0 nova_compute[189381]: 2025-11-25 10:47:20.302 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:47:20 compute-0 nova_compute[189381]: 2025-11-25 10:47:20.303 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:47:20 compute-0 nova_compute[189381]: 2025-11-25 10:47:20.304 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:20 compute-0 nova_compute[189381]: 2025-11-25 10:47:20.304 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:20 compute-0 nova_compute[189381]: 2025-11-25 10:47:20.304 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:47:21 compute-0 nova_compute[189381]: 2025-11-25 10:47:21.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:21 compute-0 nova_compute[189381]: 2025-11-25 10:47:21.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:21 compute-0 podman[245744]: 2025-11-25 10:47:21.955828868 +0000 UTC m=+0.066743630 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm)
Nov 25 10:47:21 compute-0 podman[245745]: 2025-11-25 10:47:21.959000449 +0000 UTC m=+0.066901784 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:47:22 compute-0 nova_compute[189381]: 2025-11-25 10:47:22.858 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:23 compute-0 nova_compute[189381]: 2025-11-25 10:47:23.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:47:23 compute-0 nova_compute[189381]: 2025-11-25 10:47:23.154 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:24 compute-0 podman[245786]: 2025-11-25 10:47:24.998209595 +0000 UTC m=+0.111554528 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 10:47:27 compute-0 nova_compute[189381]: 2025-11-25 10:47:27.860 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:27 compute-0 podman[245812]: 2025-11-25 10:47:27.950527142 +0000 UTC m=+0.061634973 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:47:28 compute-0 nova_compute[189381]: 2025-11-25 10:47:28.156 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:29 compute-0 sshd-session[245832]: Connection closed by authenticating user root 171.244.51.45 port 44060 [preauth]
Nov 25 10:47:29 compute-0 podman[203557]: time="2025-11-25T10:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:47:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:47:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Nov 25 10:47:31 compute-0 openstack_network_exporter[205722]: ERROR   10:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:47:31 compute-0 openstack_network_exporter[205722]: ERROR   10:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:47:31 compute-0 openstack_network_exporter[205722]: ERROR   10:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:47:31 compute-0 openstack_network_exporter[205722]: ERROR   10:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:47:31 compute-0 openstack_network_exporter[205722]: ERROR   10:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:47:32 compute-0 nova_compute[189381]: 2025-11-25 10:47:32.863 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:33 compute-0 nova_compute[189381]: 2025-11-25 10:47:33.159 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:33 compute-0 podman[245834]: 2025-11-25 10:47:33.938954267 +0000 UTC m=+0.058352979 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:47:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:47:36.049 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:47:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:47:36.050 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:47:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:47:36.050 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:47:37 compute-0 nova_compute[189381]: 2025-11-25 10:47:37.865 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:38 compute-0 nova_compute[189381]: 2025-11-25 10:47:38.162 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:39 compute-0 podman[245857]: 2025-11-25 10:47:39.966034368 +0000 UTC m=+0.065873805 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:47:39 compute-0 podman[245856]: 2025-11-25 10:47:39.966192923 +0000 UTC m=+0.068288075 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.license=GPLv2)
Nov 25 10:47:42 compute-0 nova_compute[189381]: 2025-11-25 10:47:42.867 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:42 compute-0 podman[245895]: 2025-11-25 10:47:42.985063281 +0000 UTC m=+0.091135761 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, name=ubi9, release=1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 25 10:47:43 compute-0 nova_compute[189381]: 2025-11-25 10:47:43.164 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:47 compute-0 nova_compute[189381]: 2025-11-25 10:47:47.870 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:48 compute-0 nova_compute[189381]: 2025-11-25 10:47:48.167 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:48 compute-0 podman[245913]: 2025-11-25 10:47:48.969672253 +0000 UTC m=+0.088412642 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 10:47:52 compute-0 nova_compute[189381]: 2025-11-25 10:47:52.873 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:52 compute-0 podman[245934]: 2025-11-25 10:47:52.948769508 +0000 UTC m=+0.064239938 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:47:52 compute-0 podman[245933]: 2025-11-25 10:47:52.952954898 +0000 UTC m=+0.073977668 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Nov 25 10:47:53 compute-0 nova_compute[189381]: 2025-11-25 10:47:53.169 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:55 compute-0 podman[245975]: 2025-11-25 10:47:55.985306425 +0000 UTC m=+0.105021840 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 25 10:47:57 compute-0 nova_compute[189381]: 2025-11-25 10:47:57.876 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:58 compute-0 nova_compute[189381]: 2025-11-25 10:47:58.172 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:47:58 compute-0 podman[246001]: 2025-11-25 10:47:58.978629703 +0000 UTC m=+0.089915096 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 10:47:59 compute-0 podman[203557]: time="2025-11-25T10:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:47:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:47:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 25 10:48:01 compute-0 openstack_network_exporter[205722]: ERROR   10:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:48:01 compute-0 openstack_network_exporter[205722]: ERROR   10:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:48:01 compute-0 openstack_network_exporter[205722]: ERROR   10:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:48:01 compute-0 openstack_network_exporter[205722]: ERROR   10:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:48:01 compute-0 openstack_network_exporter[205722]: ERROR   10:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:48:02 compute-0 nova_compute[189381]: 2025-11-25 10:48:02.879 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:03 compute-0 nova_compute[189381]: 2025-11-25 10:48:03.174 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:04 compute-0 podman[246021]: 2025-11-25 10:48:04.94832101 +0000 UTC m=+0.056088863 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:48:07 compute-0 nova_compute[189381]: 2025-11-25 10:48:07.881 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:08 compute-0 nova_compute[189381]: 2025-11-25 10:48:08.177 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:10 compute-0 podman[246043]: 2025-11-25 10:48:10.961161767 +0000 UTC m=+0.070188969 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Nov 25 10:48:10 compute-0 podman[246044]: 2025-11-25 10:48:10.963588287 +0000 UTC m=+0.070484517 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:48:11 compute-0 nova_compute[189381]: 2025-11-25 10:48:11.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:12 compute-0 nova_compute[189381]: 2025-11-25 10:48:12.883 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:13 compute-0 nova_compute[189381]: 2025-11-25 10:48:13.179 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:14 compute-0 podman[246081]: 2025-11-25 10:48:14.008353531 +0000 UTC m=+0.106036750 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release-0.7.12=, config_id=edpm, com.redhat.component=ubi9-container, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler)
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.047 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.048 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.135 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.206 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.207 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.262 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.263 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.325 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.326 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.402 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.410 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.468 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.469 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.528 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.529 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.592 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.593 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.661 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.669 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.732 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.733 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.798 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.799 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.859 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.860 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:48:14 compute-0 nova_compute[189381]: 2025-11-25 10:48:14.920 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.292 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.293 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4773MB free_disk=72.1612319946289GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.293 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.294 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.373 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.373 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 613e6b77-82b6-426c-90b1-38d6776feb1f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.374 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.374 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.374 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.396 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.421 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.421 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.436 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.473 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.579 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.603 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.605 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:48:15 compute-0 nova_compute[189381]: 2025-11-25 10:48:15.605 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:48:17 compute-0 nova_compute[189381]: 2025-11-25 10:48:17.886 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:18 compute-0 nova_compute[189381]: 2025-11-25 10:48:18.182 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:19 compute-0 nova_compute[189381]: 2025-11-25 10:48:19.606 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:19 compute-0 nova_compute[189381]: 2025-11-25 10:48:19.607 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:48:19 compute-0 nova_compute[189381]: 2025-11-25 10:48:19.607 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:48:19 compute-0 podman[246138]: 2025-11-25 10:48:19.957067789 +0000 UTC m=+0.064407592 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 10:48:22 compute-0 nova_compute[189381]: 2025-11-25 10:48:22.385 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:48:22 compute-0 nova_compute[189381]: 2025-11-25 10:48:22.386 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:48:22 compute-0 nova_compute[189381]: 2025-11-25 10:48:22.387 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:48:22 compute-0 nova_compute[189381]: 2025-11-25 10:48:22.387 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:48:22 compute-0 nova_compute[189381]: 2025-11-25 10:48:22.888 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:23 compute-0 nova_compute[189381]: 2025-11-25 10:48:23.184 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:23 compute-0 podman[246159]: 2025-11-25 10:48:23.967528606 +0000 UTC m=+0.069498089 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:48:24 compute-0 podman[246158]: 2025-11-25 10:48:24.001449551 +0000 UTC m=+0.105641398 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6)
Nov 25 10:48:27 compute-0 podman[246203]: 2025-11-25 10:48:27.001755656 +0000 UTC m=+0.110449136 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.890 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.912 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.926 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.927 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.927 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.928 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.928 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.928 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:27 compute-0 nova_compute[189381]: 2025-11-25 10:48:27.929 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:48:28 compute-0 nova_compute[189381]: 2025-11-25 10:48:28.186 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:28 compute-0 nova_compute[189381]: 2025-11-25 10:48:28.337 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:28 compute-0 nova_compute[189381]: 2025-11-25 10:48:28.338 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:48:29 compute-0 podman[203557]: time="2025-11-25T10:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:48:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:48:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 25 10:48:29 compute-0 podman[246229]: 2025-11-25 10:48:29.955903015 +0000 UTC m=+0.070271481 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:48:31 compute-0 openstack_network_exporter[205722]: ERROR   10:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:48:31 compute-0 openstack_network_exporter[205722]: ERROR   10:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:48:31 compute-0 openstack_network_exporter[205722]: ERROR   10:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:48:31 compute-0 openstack_network_exporter[205722]: ERROR   10:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:48:31 compute-0 openstack_network_exporter[205722]: ERROR   10:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:48:32 compute-0 nova_compute[189381]: 2025-11-25 10:48:32.892 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:33 compute-0 nova_compute[189381]: 2025-11-25 10:48:33.189 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.700 189385 DEBUG nova.compute.manager [req-a157484c-1d1f-453a-968b-b05a03164ce7 req-6080775f-f6cb-4d04-a17a-aa9dda086f4c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received event network-changed-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.701 189385 DEBUG nova.compute.manager [req-a157484c-1d1f-453a-968b-b05a03164ce7 req-6080775f-f6cb-4d04-a17a-aa9dda086f4c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Refreshing instance network info cache due to event network-changed-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.701 189385 DEBUG oslo_concurrency.lockutils [req-a157484c-1d1f-453a-968b-b05a03164ce7 req-6080775f-f6cb-4d04-a17a-aa9dda086f4c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.702 189385 DEBUG oslo_concurrency.lockutils [req-a157484c-1d1f-453a-968b-b05a03164ce7 req-6080775f-f6cb-4d04-a17a-aa9dda086f4c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.702 189385 DEBUG nova.network.neutron [req-a157484c-1d1f-453a-968b-b05a03164ce7 req-6080775f-f6cb-4d04-a17a-aa9dda086f4c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Refreshing network info cache for port 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.756 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.757 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.757 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.758 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.758 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.760 189385 INFO nova.compute.manager [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Terminating instance
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.761 189385 DEBUG nova.compute.manager [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 10:48:34 compute-0 kernel: tap4aa1b3c5-4e (unregistering): left promiscuous mode
Nov 25 10:48:34 compute-0 NetworkManager[56317]: <info>  [1764067714.7983] device (tap4aa1b3c5-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 10:48:34 compute-0 ovn_controller[97779]: 2025-11-25T10:48:34Z|00054|binding|INFO|Releasing lport 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 from this chassis (sb_readonly=0)
Nov 25 10:48:34 compute-0 ovn_controller[97779]: 2025-11-25T10:48:34Z|00055|binding|INFO|Setting lport 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 down in Southbound
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.809 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:34 compute-0 ovn_controller[97779]: 2025-11-25T10:48:34Z|00056|binding|INFO|Removing iface tap4aa1b3c5-4e ovn-installed in OVS
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.814 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.821 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fa:5f:ba 192.168.0.183'], port_security=['fa:16:3e:fa:5f:ba 192.168.0.183'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-6oeui4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-port-tknjl3ychzd2', 'neutron:cidrs': '192.168.0.183/24', 'neutron:device_id': '613e6b77-82b6-426c-90b1-38d6776feb1f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35870011-2c24-4719-a9ee-4942cd8ed50e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-6oeui4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-port-tknjl3ychzd2', 'neutron:project_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48d58879-e124-47b1-85de-2b7aab5c0e02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53f1de54-d9db-4691-881b-b04f921a948f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.823 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 in datapath 35870011-2c24-4719-a9ee-4942cd8ed50e unbound from our chassis
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.824 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35870011-2c24-4719-a9ee-4942cd8ed50e
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.826 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.846 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd5da5b-f726-49c6-8150-2d98a0a53294]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:48:34 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 25 10:48:34 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 27.829s CPU time.
Nov 25 10:48:34 compute-0 systemd-machined[155706]: Machine qemu-3-instance-00000003 terminated.
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.894 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c672bb-25c7-42df-bbd5-03aace6e2f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.899 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7218a4-b1d6-461a-8db5-f7ddba8a6785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.935 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[6d01e244-037e-4c57-a62a-d3f4e3fcc3e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.954 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1a57ba-972c-42a0-bfa1-4160d4224201]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35870011-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:64:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369752, 'reachable_time': 36927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246263, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.982 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf0aa47-b038-44fa-8fe7-b0b82f80ef0d]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369763, 'tstamp': 369763}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246264, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369766, 'tstamp': 369766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246264, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.984 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35870011-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.986 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:34 compute-0 nova_compute[189381]: 2025-11-25 10:48:34.992 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.993 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35870011-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.995 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.996 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35870011-20, col_values=(('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:48:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:34.997 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.049 189385 INFO nova.virt.libvirt.driver [-] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Instance destroyed successfully.
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.049 189385 DEBUG nova.objects.instance [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'resources' on Instance uuid 613e6b77-82b6-426c-90b1-38d6776feb1f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.065 189385 DEBUG nova.virt.libvirt.vif [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T10:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-6uv7hhxrjxgw-pboqvxbbkmxu-vnf-dwgcgxsm5ruj',id=3,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T10:40:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-lse7ova1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T10:40:31Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODg0MjI1ODAzNzA1MzM1NzY2NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbytCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODg0MjI1ODAzNzA1MzM1NzY2NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg4NDIyNTgwMzcwNTMzNTc2NjU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04ODQyMjU4MDM3MDUzMzU3NjY1PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=613e6b77-82b6-426c-90b1-38d6776feb1f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.065 189385 DEBUG nova.network.os_vif_util [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.066 189385 DEBUG nova.network.os_vif_util [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fa:5f:ba,bridge_name='br-int',has_traffic_filtering=True,id=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4aa1b3c5-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.066 189385 DEBUG os_vif [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:5f:ba,bridge_name='br-int',has_traffic_filtering=True,id=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4aa1b3c5-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.068 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.068 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4aa1b3c5-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.070 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.073 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.076 189385 INFO os_vif [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fa:5f:ba,bridge_name='br-int',has_traffic_filtering=True,id=4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4aa1b3c5-4e')
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.077 189385 INFO nova.virt.libvirt.driver [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Deleting instance files /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f_del
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.078 189385 INFO nova.virt.libvirt.driver [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Deletion of /var/lib/nova/instances/613e6b77-82b6-426c-90b1-38d6776feb1f_del complete
Nov 25 10:48:35 compute-0 podman[246282]: 2025-11-25 10:48:35.117864755 +0000 UTC m=+0.070332538 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.156 189385 INFO nova.compute.manager [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Took 0.39 seconds to destroy the instance on the hypervisor.
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.157 189385 DEBUG oslo.service.loopingcall [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.157 189385 DEBUG nova.compute.manager [-] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.157 189385 DEBUG nova.network.neutron [-] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 10:48:35 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:48:35.065 189385 DEBUG nova.virt.libvirt.vif [None req-51de72fa-31 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 25 10:48:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:35.732 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:48:35 compute-0 nova_compute[189381]: 2025-11-25 10:48:35.733 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:35.734 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:48:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:36.050 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:48:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:36.051 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:48:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:36.052 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.606 189385 DEBUG nova.network.neutron [req-a157484c-1d1f-453a-968b-b05a03164ce7 req-6080775f-f6cb-4d04-a17a-aa9dda086f4c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updated VIF entry in instance network info cache for port 4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.606 189385 DEBUG nova.network.neutron [req-a157484c-1d1f-453a-968b-b05a03164ce7 req-6080775f-f6cb-4d04-a17a-aa9dda086f4c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updating instance_info_cache with network_info: [{"id": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "address": "fa:16:3e:fa:5f:ba", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.183", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4aa1b3c5-4e", "ovs_interfaceid": "4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.623 189385 DEBUG oslo_concurrency.lockutils [req-a157484c-1d1f-453a-968b-b05a03164ce7 req-6080775f-f6cb-4d04-a17a-aa9dda086f4c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-613e6b77-82b6-426c-90b1-38d6776feb1f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.699 189385 DEBUG nova.network.neutron [-] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.718 189385 INFO nova.compute.manager [-] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Took 1.56 seconds to deallocate network for instance.
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.774 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.775 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.822 189385 DEBUG nova.compute.manager [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received event network-vif-unplugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.823 189385 DEBUG oslo_concurrency.lockutils [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.823 189385 DEBUG oslo_concurrency.lockutils [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.824 189385 DEBUG oslo_concurrency.lockutils [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.824 189385 DEBUG nova.compute.manager [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] No waiting events found dispatching network-vif-unplugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.825 189385 WARNING nova.compute.manager [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received unexpected event network-vif-unplugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 for instance with vm_state deleted and task_state None.
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.825 189385 DEBUG nova.compute.manager [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received event network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.825 189385 DEBUG oslo_concurrency.lockutils [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.826 189385 DEBUG oslo_concurrency.lockutils [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.826 189385 DEBUG oslo_concurrency.lockutils [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.826 189385 DEBUG nova.compute.manager [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] No waiting events found dispatching network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.827 189385 WARNING nova.compute.manager [req-2f92471a-53ca-49ed-a5cc-6935c0b1a2ba req-ef700acd-21b4-406b-b694-ed5eff5085f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Received unexpected event network-vif-plugged-4aa1b3c5-4eb2-4d32-8c8d-866b842d2ec3 for instance with vm_state deleted and task_state None.
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.880 189385 DEBUG nova.compute.provider_tree [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.893 189385 DEBUG nova.scheduler.client.report [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.911 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:48:36 compute-0 nova_compute[189381]: 2025-11-25 10:48:36.953 189385 INFO nova.scheduler.client.report [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Deleted allocations for instance 613e6b77-82b6-426c-90b1-38d6776feb1f
Nov 25 10:48:37 compute-0 nova_compute[189381]: 2025-11-25 10:48:37.057 189385 DEBUG oslo_concurrency.lockutils [None req-51de72fa-31a5-4868-906e-ad4a193bc847 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "613e6b77-82b6-426c-90b1-38d6776feb1f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:48:37 compute-0 nova_compute[189381]: 2025-11-25 10:48:37.895 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:40 compute-0 nova_compute[189381]: 2025-11-25 10:48:40.071 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:48:41.737 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:48:41 compute-0 podman[246312]: 2025-11-25 10:48:41.973511952 +0000 UTC m=+0.079212052 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 10:48:41 compute-0 podman[246311]: 2025-11-25 10:48:41.986374659 +0000 UTC m=+0.095104446 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 10:48:42 compute-0 nova_compute[189381]: 2025-11-25 10:48:42.898 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:44 compute-0 podman[246351]: 2025-11-25 10:48:44.774734218 +0000 UTC m=+0.099842621 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, io.openshift.tags=base rhel9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:48:45 compute-0 nova_compute[189381]: 2025-11-25 10:48:45.074 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:47 compute-0 nova_compute[189381]: 2025-11-25 10:48:47.900 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:50 compute-0 nova_compute[189381]: 2025-11-25 10:48:50.046 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764067715.0447083, 613e6b77-82b6-426c-90b1-38d6776feb1f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:48:50 compute-0 nova_compute[189381]: 2025-11-25 10:48:50.047 189385 INFO nova.compute.manager [-] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] VM Stopped (Lifecycle Event)
Nov 25 10:48:50 compute-0 nova_compute[189381]: 2025-11-25 10:48:50.065 189385 DEBUG nova.compute.manager [None req-dd59f8e0-d528-4161-8dbc-5b0585f1a307 - - - - - -] [instance: 613e6b77-82b6-426c-90b1-38d6776feb1f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:48:50 compute-0 nova_compute[189381]: 2025-11-25 10:48:50.076 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:50 compute-0 podman[246372]: 2025-11-25 10:48:50.973997693 +0000 UTC m=+0.090708320 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent)
Nov 25 10:48:52 compute-0 nova_compute[189381]: 2025-11-25 10:48:52.902 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:54 compute-0 podman[246390]: 2025-11-25 10:48:54.944056132 +0000 UTC m=+0.062966879 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:48:54 compute-0 podman[246391]: 2025-11-25 10:48:54.954318915 +0000 UTC m=+0.063991858 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:48:55 compute-0 nova_compute[189381]: 2025-11-25 10:48:55.078 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:57 compute-0 nova_compute[189381]: 2025-11-25 10:48:57.905 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:48:58 compute-0 podman[246434]: 2025-11-25 10:48:58.019262497 +0000 UTC m=+0.120429908 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 10:48:59 compute-0 podman[203557]: time="2025-11-25T10:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:48:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:48:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
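Editor's note: the two GETs above are libpod REST calls (`/libpod/containers/json` and `/libpod/containers/stats`) arriving over podman's API socket from some Go client, per the Go-http-client User-Agent. A minimal stdlib sketch of issuing the same request, assuming the default root socket path /run/podman/podman.sock; the UnixHTTPConnection subclass is a common pattern, not podman's own client:

```python
# Talk to the libpod REST API over podman's UNIX socket using only the
# standard library. The socket path is an assumed default.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path: str):
        super().__init__("localhost")  # Host header value; ignored by podman
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
print(conn.getresponse().read()[:200])  # first bytes of the JSON listing
```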
Nov 25 10:49:00 compute-0 nova_compute[189381]: 2025-11-25 10:49:00.080 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:00 compute-0 podman[246457]: 2025-11-25 10:49:00.974734027 +0000 UTC m=+0.093871310 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:49:01 compute-0 openstack_network_exporter[205722]: ERROR   10:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:49:01 compute-0 openstack_network_exporter[205722]: ERROR   10:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:49:01 compute-0 openstack_network_exporter[205722]: ERROR   10:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:49:01 compute-0 openstack_network_exporter[205722]: ERROR   10:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:49:01 compute-0 openstack_network_exporter[205722]: ERROR   10:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
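Editor's note: the openstack_network_exporter errors above are expected on a compute node. Appctl-style calls reach a daemon through its *.ctl control socket, and ovn-northd only runs on controller nodes, so no socket exists here; the dpif-netdev calls fail because this host appears to use the kernel datapath rather than a userspace (netdev/DPDK) one. A sketch of the kind of probe behind "no control socket files found", with the usual default run directories assumed:

```python
# Probe for OVS/OVN daemon control sockets (<run_dir>/<daemon>.<pid>.ctl).
# Paths and the helper are assumptions; this is not the exporter's code.
import glob
from typing import Optional

def find_control_socket(run_dir: str, daemon: str) -> Optional[str]:
    matches = sorted(glob.glob(f"{run_dir}/{daemon}.*.ctl"))
    return matches[-1] if matches else None

for run_dir, daemon in [
    ("/var/run/ovn", "ovn-northd"),           # controller-only daemon: absent here
    ("/var/run/openvswitch", "ovsdb-server"),
    ("/var/run/openvswitch", "ovs-vswitchd"),
]:
    sock = find_control_socket(run_dir, daemon)
    print(f"{daemon}: {sock or 'no control socket found'}")
```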
Nov 25 10:49:02 compute-0 nova_compute[189381]: 2025-11-25 10:49:02.915 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.334 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.335 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2408106960>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.343 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83ab44b9-7ddb-4994-9415-20b7dd9c081c', 'name': 'vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.346 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.347 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:49:03.347406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.352 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.358 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.359 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.360 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.360 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.360 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:49:03.360209) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.361 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
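Editor's note: this cycle emits both a cumulative and a delta flavour of the same meter. For instance 83ab44b9…, network.outgoing.bytes reads 2398 while the .delta sample is 70, i.e. 70 bytes were sent since the previous cycle, whose cumulative reading was therefore 2328. A minimal sketch of that relationship, not ceilometer's actual implementation:

```python
# Delta meter = current cumulative reading minus the previous cycle's
# reading, tracked per (instance, meter). First cycle yields 0.
from typing import Dict, Tuple

_last: Dict[Tuple[str, str], int] = {}

def delta_sample(instance_id: str, meter: str, cumulative: int) -> int:
    key = (instance_id, meter)
    previous = _last.get(key, cumulative)
    _last[key] = cumulative
    return cumulative - previous

uuid = "83ab44b9-7ddb-4994-9415-20b7dd9c081c"
delta_sample(uuid, "network.outgoing.bytes", 2328)          # previous cycle
print(delta_sample(uuid, "network.outgoing.bytes", 2398))   # -> 70
```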
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.362 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.362 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:49:03.362850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.387 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.409 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.410 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.410 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.410 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.411 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.412 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.412 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.413 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.413 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:49:03.410787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:49:03.412110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.414 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:49:03.413718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.414 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.414 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.415 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.415 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.415 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/cpu volume: 36850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.415 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 47800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:49:03.415082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.416 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.416 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.416 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:49:03.416331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.416 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.417 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.417 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.417 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.417 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.418 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:49:03.417549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.438 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.438 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.439 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.459 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.460 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.460 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
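Editor's note: each instance reports three disk.device.capacity samples, which lines up with the m1.small flavor recorded in the discovery payloads above (disk=1, ephemeral=1): two 1 GiB virtual disks plus a third small device, plausibly a config drive (an assumption; the device names are not in these lines). Quick arithmetic on the sampled volumes:

```python
# Sanity-check the capacity samples from this cycle (values in bytes).
GIB = 1024 ** 3
assert GIB == 1073741824  # the two 1 GiB samples per instance
for uuid, small in [
    ("83ab44b9-7ddb-4994-9415-20b7dd9c081c", 583680),
    ("31174924-a3e8-4662-baad-ac9aa49c01ab", 485376),
]:
    print(uuid[:8], f"third device: {small / 1024:.0f} KiB")  # 570 / 474 KiB
```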
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.461 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.461 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:49:03.461497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.517 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.517 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.518 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.574 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.575 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.575 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.576 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.576 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.576 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 567192189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.577 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 97341337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.577 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 75612085 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.577 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.577 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.578 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
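Between cycles the read-latency volumes only grow, consistent with cumulative counters (libvirt reports total read time in nanoseconds) rather than per-interval figures. A sketch of turning two successive readings into an interval value, assuming simple reset-on-decrease handling:

    def counter_delta(prev, curr):
        # Difference between two cumulative readings; a decrease is treated
        # as a counter reset (e.g. after an instance reboot).
        return curr - prev if curr >= prev else curr

    # Hypothetical back-to-back readings for one device, in nanoseconds:
    prev_ns, curr_ns = 560_000_000, 567_192_189
    print(f"read time this interval: {counter_delta(prev_ns, curr_ns) / 1e6:.2f} ms")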
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.578 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.579 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.579 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:49:03.576646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:49:03.579423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.579 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.580 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.580 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.580 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.580 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.582 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:49:03.581910) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.582 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.582 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.582 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.583 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.583 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.584 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.584 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.584 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.585 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.585 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.585 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.586 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.586 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:49:03.584369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.586 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.587 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 1590671507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:49:03.587185) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.587 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 14157667 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.587 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.588 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.588 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.588 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.589 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.589 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:49:03.589495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.590 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
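Both instances report power.state volume 1. Assuming the meter carries libvirt's numeric virDomainState (these values are consistent with that), 1 reads as "running":

    # libvirt virDomainState values; under this assumed mapping the
    # volume 1 above decodes to "running".
    LIBVIRT_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_DOMAIN_STATE.get(1, "unknown"))  # -> running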
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.590 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.591 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.591 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.591 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.591 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.592 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:49:03.590974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.592 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.593 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.593 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:49:03.593405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.593 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.595 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:49:03.594859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.595 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.595 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.595 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.596 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.596 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
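For instance 83ab44b9-... the first device reports usage 21299200 but allocation 21635072: the disk.device.{capacity,allocation,usage} meters appear to track the related but distinct per-device sizes libvirt exposes (logical capacity, allocated host bytes, physical size). As an illustrative way to see comparable figures for a local image, qemu-img reports virtual and actual sizes; the path below is hypothetical:

    import json
    import subprocess

    def image_sizes(path):
        # Return (virtual_size, actual_size) in bytes for a disk image.
        out = subprocess.run(
            ["qemu-img", "info", "--output=json", path],
            check=True, capture_output=True, text=True,
        ).stdout
        info = json.loads(out)
        return info["virtual-size"], info.get("actual-size")

    # Hypothetical instance disk path on a compute node:
    # virtual, actual = image_sizes("/var/lib/nova/instances/<uuid>/disk")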
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.596 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
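The manager skips network.incoming.bytes.rate this cycle for lack of resources. Rate meters in general need two timestamped readings per resource before they can emit anything; a sketch of that two-point computation (the caching policy here is an assumption, not ceilometer's code):

    _last = {}  # resource id -> (timestamp_s, cumulative_bytes)

    def bytes_rate(resource, now_s, total_bytes):
        # B/s since the previous reading, or None on the first poll.
        prev = _last.get(resource)
        _last[resource] = (now_s, total_bytes)
        if prev is None or now_s <= prev[0]:
            return None
        return (total_bytes - prev[1]) / (now_s - prev[0])

    print(bytes_rate("vm1-eth0", 100.0, 10_000))  # None: first cycle, nothing to emit
    print(bytes_rate("vm1-eth0", 160.0, 25_000))  # 250.0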
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.597 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:49:03.597488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.598 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.598 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.599 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.599 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.600 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:49:03.598654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:49:03.600055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.600 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.601 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.601 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:49:03.601165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.601 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.602 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.602 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.602 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.602 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.603 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:49:03.602552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:49:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:49:03.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
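The oslo log format's PID field explains the interleaving above: worker 14 emits the polling and "Finished processing" lines while worker 12 emits the "Updated heartbeat" status lines, so status updates trail the polls that produced them. A toy reproduction of that producer/consumer split with two processes and a queue (no ceilometer code involved):

    import multiprocessing as mp
    from datetime import datetime, timezone

    def poller(q):
        for meter in ("disk.device.read.bytes", "power.state"):
            print(f"14 DEBUG polled {meter}")
            q.put((meter, datetime.now(timezone.utc).isoformat()))
        q.put(None)  # sentinel: polling done

    def status_writer(q):
        while (item := q.get()) is not None:
            meter, ts = item
            print(f"12 DEBUG Updated heartbeat for {meter} ({ts})")

    if __name__ == "__main__":
        q = mp.Queue()
        w = mp.Process(target=status_writer, args=(q,))
        w.start()
        poller(q)
        w.join()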
Nov 25 10:49:05 compute-0 nova_compute[189381]: 2025-11-25 10:49:05.025 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:05 compute-0 nova_compute[189381]: 2025-11-25 10:49:05.026 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 10:49:05 compute-0 nova_compute[189381]: 2025-11-25 10:49:05.083 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:05 compute-0 sshd-session[246478]: Accepted publickey for zuul from 38.102.83.176 port 40344 ssh2: RSA SHA256:AY70hpNEXJR6fAK1y9JiAEJ1ZGByytYoO+9neWZvmFk
Nov 25 10:49:05 compute-0 systemd-logind[822]: New session 30 of user zuul.
Nov 25 10:49:05 compute-0 systemd[1]: Started Session 30 of User zuul.
Nov 25 10:49:05 compute-0 sshd-session[246478]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:49:05 compute-0 podman[246480]: 2025-11-25 10:49:05.552748668 +0000 UTC m=+0.065944594 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
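The health_status line above embeds the container's full edpm_ansible config_data. One plausible, hand-written rendering of a few of those keys into a podman invocation (not edpm_ansible's actual logic):

    def podman_run_argv(name, cfg):
        # Translate a small subset of the config_data keys into podman args.
        argv = ["podman", "run", "-d", "--name", name,
                "--restart", cfg.get("restart", "no")]
        if cfg.get("privileged"):
            argv.append("--privileged")
        if cfg.get("net"):
            argv += ["--net", cfg["net"]]
        for p in cfg.get("ports", []):
            argv += ["-p", p]
        for v in cfg.get("volumes", []):
            argv += ["-v", v]
        argv.append(cfg["image"])
        return argv + list(cfg.get("command", []))

    print(" ".join(podman_run_argv("podman_exporter", {
        "image": "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
        "restart": "always", "privileged": True, "net": "host",
        "ports": ["9882:9882"],
        "command": ["--web.config.file=/etc/podman_exporter/podman_exporter.yaml"],
    })))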
Nov 25 10:49:06 compute-0 sudo[246680]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vopuzchuwzuymwvxvnxlozyayprvryut ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764067745.6277542-59783-61720901982780/AnsiballZ_command.py'
Nov 25 10:49:06 compute-0 sudo[246680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:49:06 compute-0 python3[246682]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:49:06 compute-0 sudo[246680]: pam_unix(sudo:session): session closed for user root
Nov 25 10:49:08 compute-0 ovn_controller[97779]: 2025-11-25T10:49:08Z|00057|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Nov 25 10:49:08 compute-0 nova_compute[189381]: 2025-11-25 10:49:08.033 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:10 compute-0 nova_compute[189381]: 2025-11-25 10:49:10.042 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:10 compute-0 nova_compute[189381]: 2025-11-25 10:49:10.042 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:49:10 compute-0 nova_compute[189381]: 2025-11-25 10:49:10.062 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:49:10 compute-0 nova_compute[189381]: 2025-11-25 10:49:10.086 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:11 compute-0 nova_compute[189381]: 2025-11-25 10:49:11.042 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:12 compute-0 podman[246719]: 2025-11-25 10:49:12.976828187 +0000 UTC m=+0.087614152 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:49:12 compute-0 podman[246720]: 2025-11-25 10:49:12.98289564 +0000 UTC m=+0.096646490 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm)
Nov 25 10:49:13 compute-0 nova_compute[189381]: 2025-11-25 10:49:13.036 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.051 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.051 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.154 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.216 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.217 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.277 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.279 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.351 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.352 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.424 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.433 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.510 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.511 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.572 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.573 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.644 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.645 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:14 compute-0 nova_compute[189381]: 2025-11-25 10:49:14.706 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:14 compute-0 podman[246784]: 2025-11-25 10:49:14.968903386 +0000 UTC m=+0.084376079 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm)
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.036 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.037 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4949MB free_disk=72.18324661254883GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.037 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.038 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.088 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.172 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.172 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.173 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.173 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.365 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.377 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.397 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:49:15 compute-0 nova_compute[189381]: 2025-11-25 10:49:15.397 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:17 compute-0 nova_compute[189381]: 2025-11-25 10:49:17.397 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:18 compute-0 nova_compute[189381]: 2025-11-25 10:49:18.038 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:20 compute-0 nova_compute[189381]: 2025-11-25 10:49:20.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:20 compute-0 nova_compute[189381]: 2025-11-25 10:49:20.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:20 compute-0 nova_compute[189381]: 2025-11-25 10:49:20.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:49:20 compute-0 nova_compute[189381]: 2025-11-25 10:49:20.091 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:20 compute-0 nova_compute[189381]: 2025-11-25 10:49:20.664 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:49:20 compute-0 nova_compute[189381]: 2025-11-25 10:49:20.665 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:49:20 compute-0 nova_compute[189381]: 2025-11-25 10:49:20.665 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:49:21 compute-0 podman[246804]: 2025-11-25 10:49:21.94941557 +0000 UTC m=+0.058507881 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.043 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.457 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "2386416a-4434-4f8a-836b-0c58a5808f62" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.458 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2386416a-4434-4f8a-836b-0c58a5808f62" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.533 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updating instance_info_cache with network_info: [{"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.590 189385 DEBUG nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.598 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.599 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.600 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.601 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.602 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.603 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.742 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.743 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.754 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.755 189385 INFO nova.compute.claims [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Claim successful on node compute-0.ctlplane.example.com
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.930 189385 DEBUG nova.compute.provider_tree [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.943 189385 DEBUG nova.scheduler.client.report [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.965 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:23 compute-0 nova_compute[189381]: 2025-11-25 10:49:23.966 189385 DEBUG nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.033 189385 DEBUG nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.061 189385 INFO nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.132 189385 DEBUG nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.353 189385 DEBUG nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.354 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.355 189385 INFO nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Creating image(s)
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.355 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.355 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.356 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.356 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "2f0b4681cd51b11d0e715ed9a7bc9065a87be20c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:24 compute-0 nova_compute[189381]: 2025-11-25 10:49:24.356 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2f0b4681cd51b11d0e715ed9a7bc9065a87be20c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:25 compute-0 nova_compute[189381]: 2025-11-25 10:49:25.093 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:25 compute-0 podman[246824]: 2025-11-25 10:49:25.963025502 +0000 UTC m=+0.069024261 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 10:49:25 compute-0 podman[246823]: 2025-11-25 10:49:25.976380323 +0000 UTC m=+0.076875435 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, name=ubi9-minimal, vcs-type=git, version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 10:49:26 compute-0 nova_compute[189381]: 2025-11-25 10:49:26.682 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:26 compute-0 nova_compute[189381]: 2025-11-25 10:49:26.947 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:27 compute-0 nova_compute[189381]: 2025-11-25 10:49:27.009 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c.part --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:27 compute-0 nova_compute[189381]: 2025-11-25 10:49:27.011 189385 DEBUG nova.virt.images [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] ad85c17a-1157-4895-a348-6bee96013273 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 10:49:27 compute-0 nova_compute[189381]: 2025-11-25 10:49:27.063 189385 DEBUG nova.privsep.utils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 10:49:27 compute-0 nova_compute[189381]: 2025-11-25 10:49:27.064 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c.part /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:28 compute-0 nova_compute[189381]: 2025-11-25 10:49:28.042 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:29 compute-0 podman[246879]: 2025-11-25 10:49:29.00503171 +0000 UTC m=+0.122168919 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.484 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c.part /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c.converted" returned: 0 in 2.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.489 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.582 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c.converted --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.584 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2f0b4681cd51b11d0e715ed9a7bc9065a87be20c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.598 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.673 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.674 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "2f0b4681cd51b11d0e715ed9a7bc9065a87be20c" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.675 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2f0b4681cd51b11d0e715ed9a7bc9065a87be20c" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.686 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:29 compute-0 podman[203557]: time="2025-11-25T10:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:49:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:49:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.764 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:29 compute-0 nova_compute[189381]: 2025-11-25 10:49:29.765 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c,backing_fmt=raw /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.095 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.168 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c,backing_fmt=raw /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk 1073741824" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.169 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2f0b4681cd51b11d0e715ed9a7bc9065a87be20c" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
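With the raw base in place, the instance's root disk is created as a qcow2 copy-on-write overlay: `qemu-img create -f qcow2 -o backing_file=<base>,backing_fmt=raw <disk> 1073741824` allocates a 1 GiB virtual disk that stores only the guest's writes, while reads of untouched blocks fall through to the shared, read-only base image. The same call as a sketch (paths and size from the log; the function is illustrative). The identical pattern repeats further down for the ephemeral disk, disk.eph0, on top of the ephemeral_1_0706d66 base:

    import subprocess

    def create_cow_overlay(base, overlay, size_bytes):
        # Only deltas land in the overlay; one base can back many instances.
        subprocess.run(
            ['qemu-img', 'create', '-f', 'qcow2',
             '-o', 'backing_file=%s,backing_fmt=raw' % base,
             overlay, str(size_bytes)],
            check=True)

    create_cow_overlay(
        '/var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c',
        '/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk',
        1073741824)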
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.170 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.226 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.228 189385 DEBUG nova.virt.disk.api [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Checking if we can resize image /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.228 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.285 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.287 189385 DEBUG nova.virt.disk.api [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Cannot resize image /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
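"Cannot resize image ... to a smaller size" is the expected outcome here, not an error: the flavor's 1 GiB root disk is no larger than the overlay's virtual size, and Nova only ever grows disks, since shrinking an image that carries a filesystem would corrupt it. The check reduces to comparing the requested size against virtual-size from `qemu-img info` (a sketch of the grow-only test; the real one is can_resize_image in nova/virt/disk/api.py):

    import json
    import subprocess

    def can_resize_image(path, new_size_bytes):
        out = subprocess.run(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            check=True, capture_output=True, text=True)
        virtual_size = json.loads(out.stdout)['virtual-size']
        # Grow-only: anything at or below the current size is refused.
        return new_size_bytes > virtual_size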
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.287 189385 DEBUG nova.objects.instance [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'migration_context' on Instance uuid 2386416a-4434-4f8a-836b-0c58a5808f62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.305 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.306 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.307 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
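The three lock entries above bracket a write to the instance's disk.info file. Nova records each disk's authoritative format there, a small JSON map of disk path to format, so later code paths never re-detect the format by probing file contents, which a guest could spoof by writing a qcow2 header into a raw disk. A sketch of that record-once pattern, assuming the path-to-format JSON shape but simplifying Nova's file locking to an atomic rename:

    import json
    import os

    def write_disk_info(info_path, disk_path, fmt):
        data = {}
        if os.path.exists(info_path):
            with open(info_path) as f:
                data = json.load(f)
        data[disk_path] = fmt          # e.g. {".../disk": "qcow2"}
        tmp = info_path + '.tmp'
        with open(tmp, 'w') as f:
            json.dump(data, f)
        os.replace(tmp, info_path)     # atomic on POSIX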
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.326 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.385 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.386 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.387 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.398 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.458 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.460 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.747 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.eph0 1073741824" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.748 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.749 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.812 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.815 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.816 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Ensure instance console log exists: /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.827 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.828 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.828 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.841 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:49:10Z,direct_url=<?>,disk_format='qcow2',id=ad85c17a-1157-4895-a348-6bee96013273,min_disk=0,min_ram=0,name='fvt_testing_image',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:49:15Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'ad85c17a-1157-4895-a348-6bee96013273'}], 'ephemerals': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vdb', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 1, 'device_type': 'disk', 'encrypted': False, 'encryption_options': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.850 189385 WARNING nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.857 189385 DEBUG nova.virt.libvirt.host [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.857 189385 DEBUG nova.virt.libvirt.host [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.863 189385 DEBUG nova.virt.libvirt.host [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.864 189385 DEBUG nova.virt.libvirt.host [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.864 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.865 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:49:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='4ec5c8ea-8824-41ae-b36e-2dc837c0f90d',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-25T10:49:10Z,direct_url=<?>,disk_format='qcow2',id=ad85c17a-1157-4895-a348-6bee96013273,min_disk=0,min_ram=0,name='fvt_testing_image',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-25T10:49:15Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.866 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.866 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.866 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.867 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.867 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.867 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.868 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.868 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.869 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.869 189385 DEBUG nova.virt.hardware [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
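With no topology constraints from the flavor or image (every limit and preference above is 0:0:0), the search space collapses: the only way to split 1 vCPU across sockets, cores and threads under the 65536 caps is 1:1:1, which is exactly the single topology reported. The enumeration at the heart of those lines can be sketched as follows (illustrative; Nova's version also applies preference-based ordering):

    def possible_topologies(vcpus, max_each=65536):
        # Yield every (sockets, cores, threads) whose product is vcpus.
        for sockets in range(1, min(vcpus, max_each) + 1):
            for cores in range(1, min(vcpus, max_each) + 1):
                for threads in range(1, min(vcpus, max_each) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # -> [(1, 1, 1)]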
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.875 189385 DEBUG nova.objects.instance [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'pci_devices' on Instance uuid 2386416a-4434-4f8a-836b-0c58a5808f62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:49:30 compute-0 nova_compute[189381]: 2025-11-25 10:49:30.895 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] End _get_guest_xml xml=<domain type="kvm">
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <uuid>2386416a-4434-4f8a-836b-0c58a5808f62</uuid>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <name>instance-00000005</name>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <memory>524288</memory>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <metadata>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <nova:name>fvt_testing_server</nova:name>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 10:49:30</nova:creationTime>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <nova:flavor name="fvt_testing_flavor">
Nov 25 10:49:30 compute-0 nova_compute[189381]:         <nova:memory>512</nova:memory>
Nov 25 10:49:30 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 10:49:30 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 10:49:30 compute-0 nova_compute[189381]:         <nova:ephemeral>1</nova:ephemeral>
Nov 25 10:49:30 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 10:49:30 compute-0 nova_compute[189381]:         <nova:user uuid="af7a147d86064a21a94066f72173bba2">admin</nova:user>
Nov 25 10:49:30 compute-0 nova_compute[189381]:         <nova:project uuid="aef0c6ba1dd54218a527ced3f8d2a1be">admin</nova:project>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="ad85c17a-1157-4895-a348-6bee96013273"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <nova:ports/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   </metadata>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <system>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <entry name="serial">2386416a-4434-4f8a-836b-0c58a5808f62</entry>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <entry name="uuid">2386416a-4434-4f8a-836b-0c58a5808f62</entry>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </system>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <os>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   </os>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <features>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <apic/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   </features>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   </clock>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   </cpu>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   <devices>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.eph0"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <target dev="vdb" bus="virtio"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.config"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </disk>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/console.log" append="off"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </serial>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <video>
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </video>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </rng>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 10:49:30 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 10:49:30 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 10:49:30 compute-0 nova_compute[189381]:   </devices>
Nov 25 10:49:30 compute-0 nova_compute[189381]: </domain>
Nov 25 10:49:30 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
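The domain XML above is then handed to libvirt, which materializes it as qemu-5-instance-00000005 (see the systemd-machined entry below). Outside of Nova, the same handoff can be sketched with the libvirt Python bindings, assuming a local qemu:///system connection; Nova itself goes through nova.virt.libvirt.host rather than calling libvirt.open() directly:

    import libvirt

    # domain.xml would hold a <domain> document like the one logged above.
    xml = open('domain.xml').read()
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # boot it: the "Started Virtual Machine" step
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()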
Nov 25 10:49:31 compute-0 nova_compute[189381]: 2025-11-25 10:49:31.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:49:31 compute-0 nova_compute[189381]: 2025-11-25 10:49:31.053 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:49:31 compute-0 nova_compute[189381]: 2025-11-25 10:49:31.054 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:49:31 compute-0 nova_compute[189381]: 2025-11-25 10:49:31.054 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 10:49:31 compute-0 nova_compute[189381]: 2025-11-25 10:49:31.055 189385 INFO nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Using config drive
Nov 25 10:49:31 compute-0 openstack_network_exporter[205722]: ERROR   10:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd

Nov 25 10:49:31 compute-0 openstack_network_exporter[205722]: ERROR   10:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:49:31 compute-0 openstack_network_exporter[205722]: ERROR   10:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:49:31 compute-0 openstack_network_exporter[205722]: ERROR   10:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:49:31 compute-0 openstack_network_exporter[205722]: ERROR   10:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:49:31 compute-0 nova_compute[189381]: 2025-11-25 10:49:31.436 189385 INFO nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Creating config drive at /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.config
Nov 25 10:49:31 compute-0 nova_compute[189381]: 2025-11-25 10:49:31.441 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppic8lvk8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:49:31 compute-0 nova_compute[189381]: 2025-11-25 10:49:31.567 189385 DEBUG oslo_concurrency.processutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppic8lvk8" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
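Because the instance has no network ports (network_info=[] and <nova:ports/> above), metadata is delivered via the config drive: an ISO9660 image labeled config-2 that the guest's cloud-init mounts read-only. The mkisofs invocation above packages a temporary directory into disk.config; a sketch of the same packaging with a minimal metadata tree (flags mirror the logged command, -publisher omitted; the meta_data.json content is a placeholder, not what Nova wrote):

    import json
    import os
    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        md_dir = os.path.join(tmp, 'openstack', 'latest')
        os.makedirs(md_dir)
        with open(os.path.join(md_dir, 'meta_data.json'), 'w') as f:
            json.dump({'uuid': '2386416a-4434-4f8a-836b-0c58a5808f62'}, f)
        subprocess.run(
            ['mkisofs', '-o', 'disk.config', '-ldots', '-allow-lowercase',
             '-allow-multidot', '-l', '-quiet', '-J', '-r',
             '-V', 'config-2', tmp],
            check=True)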
Nov 25 10:49:31 compute-0 systemd-machined[155706]: New machine qemu-5-instance-00000005.
Nov 25 10:49:31 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 25 10:49:31 compute-0 podman[246945]: 2025-11-25 10:49:31.741997432 +0000 UTC m=+0.097975898 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.253 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764067772.2522712, 2386416a-4434-4f8a-836b-0c58a5808f62 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.255 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] VM Resumed (Lifecycle Event)
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.259 189385 DEBUG nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.259 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.265 189385 INFO nova.virt.libvirt.driver [-] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Instance spawned successfully.
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.266 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.287 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.297 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.302 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.302 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.302 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.303 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.303 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.303 189385 DEBUG nova.virt.libvirt.driver [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.328 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.328 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764067772.2546744, 2386416a-4434-4f8a-836b-0c58a5808f62 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.329 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] VM Started (Lifecycle Event)
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.353 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.358 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.376 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] During sync_power_state the instance has a pending task (spawning). Skip.
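Both lifecycle events, Resumed and then Started, arrive while the build still holds task_state=spawning, so the power-state synchronizer deliberately backs off twice: reconciling DB power_state 0 (NOSTATE) against VM power_state 1 (RUNNING) mid-spawn would race the build, which is about to record the final state itself. The skip decision reduces to a small guard (a sketch with constants inlined; the real handler is ComputeManager._sync_instance_power_state):

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:    # e.g. 'spawning': an operation owns
            return False              # the instance, so do not fight it
        return db_power_state != vm_power_state

    print(should_sync_power_state('spawning', 0, 1))  # -> False, hence "Skip."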
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.406 189385 INFO nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Took 8.05 seconds to spawn the instance on the hypervisor.
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.407 189385 DEBUG nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.461 189385 INFO nova.compute.manager [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Took 8.75 seconds to build instance.
Nov 25 10:49:32 compute-0 nova_compute[189381]: 2025-11-25 10:49:32.478 189385 DEBUG oslo_concurrency.lockutils [None req-e31477f8-0a71-4908-87ae-f12a44bc3b7b af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2386416a-4434-4f8a-836b-0c58a5808f62" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
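The 9.020s hold on lock "2386416a-4434-4f8a-836b-0c58a5808f62" spans the entire build, and every state-changing operation on an instance serializes on that same named lock, which is why the terminate request arriving at 10:49:45 below must acquire it before teardown can begin. The pattern uses oslo.concurrency's synchronized decorator; a usage sketch only, not Nova's actual wiring:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('2386416a-4434-4f8a-836b-0c58a5808f62')
    def _locked_do_build_and_run_instance():
        ...  # spawn work; the lock is held for the call's full duration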
Nov 25 10:49:33 compute-0 nova_compute[189381]: 2025-11-25 10:49:33.045 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:34 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 10:49:34 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 10:49:35 compute-0 nova_compute[189381]: 2025-11-25 10:49:35.098 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:35 compute-0 podman[247001]: 2025-11-25 10:49:35.955392445 +0000 UTC m=+0.062383031 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:49:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:49:36.051 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:49:36.052 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:49:36.052 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:38 compute-0 nova_compute[189381]: 2025-11-25 10:49:38.048 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:40 compute-0 nova_compute[189381]: 2025-11-25 10:49:40.101 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:43 compute-0 nova_compute[189381]: 2025-11-25 10:49:43.052 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:43 compute-0 podman[247024]: 2025-11-25 10:49:43.967078665 +0000 UTC m=+0.077232265 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 10:49:43 compute-0 podman[247025]: 2025-11-25 10:49:43.977587235 +0000 UTC m=+0.087641882 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.102 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.937 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "2386416a-4434-4f8a-836b-0c58a5808f62" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.937 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2386416a-4434-4f8a-836b-0c58a5808f62" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.938 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "2386416a-4434-4f8a-836b-0c58a5808f62-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.938 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2386416a-4434-4f8a-836b-0c58a5808f62-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.938 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2386416a-4434-4f8a-836b-0c58a5808f62-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.939 189385 INFO nova.compute.manager [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Terminating instance
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.940 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "refresh_cache-2386416a-4434-4f8a-836b-0c58a5808f62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.940 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquired lock "refresh_cache-2386416a-4434-4f8a-836b-0c58a5808f62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:49:45 compute-0 nova_compute[189381]: 2025-11-25 10:49:45.940 189385 DEBUG nova.network.neutron [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 10:49:45 compute-0 podman[247059]: 2025-11-25 10:49:45.957162578 +0000 UTC m=+0.071977896 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, managed_by=edpm_ansible, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, vcs-type=git, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 10:49:46 compute-0 nova_compute[189381]: 2025-11-25 10:49:46.096 189385 DEBUG nova.network.neutron [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 10:49:46 compute-0 nova_compute[189381]: 2025-11-25 10:49:46.684 189385 DEBUG nova.network.neutron [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:49:46 compute-0 nova_compute[189381]: 2025-11-25 10:49:46.698 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Releasing lock "refresh_cache-2386416a-4434-4f8a-836b-0c58a5808f62" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:49:46 compute-0 nova_compute[189381]: 2025-11-25 10:49:46.699 189385 DEBUG nova.compute.manager [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 10:49:46 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 25 10:49:46 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 15.189s CPU time.
Nov 25 10:49:46 compute-0 systemd-machined[155706]: Machine qemu-5-instance-00000005 terminated.
Nov 25 10:49:46 compute-0 nova_compute[189381]: 2025-11-25 10:49:46.957 189385 INFO nova.virt.libvirt.driver [-] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Instance destroyed successfully.
Nov 25 10:49:46 compute-0 nova_compute[189381]: 2025-11-25 10:49:46.957 189385 DEBUG nova.objects.instance [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'resources' on Instance uuid 2386416a-4434-4f8a-836b-0c58a5808f62 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:49:46 compute-0 nova_compute[189381]: 2025-11-25 10:49:46.968 189385 INFO nova.virt.libvirt.driver [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Deleting instance files /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62_del
Nov 25 10:49:46 compute-0 nova_compute[189381]: 2025-11-25 10:49:46.969 189385 INFO nova.virt.libvirt.driver [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Deletion of /var/lib/nova/instances/2386416a-4434-4f8a-836b-0c58a5808f62_del complete
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.089 189385 INFO nova.compute.manager [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Took 0.39 seconds to destroy the instance on the hypervisor.
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.089 189385 DEBUG oslo.service.loopingcall [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.090 189385 DEBUG nova.compute.manager [-] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.090 189385 DEBUG nova.network.neutron [-] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.203 189385 DEBUG nova.network.neutron [-] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.213 189385 DEBUG nova.network.neutron [-] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.224 189385 INFO nova.compute.manager [-] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Took 0.13 seconds to deallocate network for instance.
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.276 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.276 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.408 189385 DEBUG nova.compute.provider_tree [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.424 189385 DEBUG nova.scheduler.client.report [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.453 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.517 189385 INFO nova.scheduler.client.report [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Deleted allocations for instance 2386416a-4434-4f8a-836b-0c58a5808f62
Nov 25 10:49:47 compute-0 nova_compute[189381]: 2025-11-25 10:49:47.616 189385 DEBUG oslo_concurrency.lockutils [None req-98c10627-db1f-4d26-b087-8de93d0e1866 af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "2386416a-4434-4f8a-836b-0c58a5808f62" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:49:48 compute-0 nova_compute[189381]: 2025-11-25 10:49:48.056 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:50 compute-0 nova_compute[189381]: 2025-11-25 10:49:50.105 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:52 compute-0 podman[247094]: 2025-11-25 10:49:52.976976484 +0000 UTC m=+0.090818103 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 10:49:53 compute-0 nova_compute[189381]: 2025-11-25 10:49:53.057 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:55 compute-0 nova_compute[189381]: 2025-11-25 10:49:55.107 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:56 compute-0 podman[247113]: 2025-11-25 10:49:56.955796293 +0000 UTC m=+0.070109332 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Nov 25 10:49:56 compute-0 podman[247114]: 2025-11-25 10:49:56.97846305 +0000 UTC m=+0.074101006 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:49:58 compute-0 nova_compute[189381]: 2025-11-25 10:49:58.059 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:49:59 compute-0 podman[203557]: time="2025-11-25T10:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:49:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:49:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 25 10:50:00 compute-0 podman[247158]: 2025-11-25 10:50:00.056482846 +0000 UTC m=+0.160204064 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:50:00 compute-0 nova_compute[189381]: 2025-11-25 10:50:00.108 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:01 compute-0 openstack_network_exporter[205722]: ERROR   10:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:50:01 compute-0 openstack_network_exporter[205722]: ERROR   10:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:50:01 compute-0 openstack_network_exporter[205722]: ERROR   10:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:50:01 compute-0 openstack_network_exporter[205722]: ERROR   10:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:50:01 compute-0 openstack_network_exporter[205722]: ERROR   10:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:50:01 compute-0 nova_compute[189381]: 2025-11-25 10:50:01.955 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764067786.9544723, 2386416a-4434-4f8a-836b-0c58a5808f62 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:50:01 compute-0 nova_compute[189381]: 2025-11-25 10:50:01.956 189385 INFO nova.compute.manager [-] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] VM Stopped (Lifecycle Event)
Nov 25 10:50:01 compute-0 podman[247184]: 2025-11-25 10:50:01.965944169 +0000 UTC m=+0.077937135 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 10:50:01 compute-0 nova_compute[189381]: 2025-11-25 10:50:01.983 189385 DEBUG nova.compute.manager [None req-b7406980-15bd-49ff-b04b-3e428c8bc3bd - - - - - -] [instance: 2386416a-4434-4f8a-836b-0c58a5808f62] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:50:03 compute-0 nova_compute[189381]: 2025-11-25 10:50:03.061 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:05 compute-0 nova_compute[189381]: 2025-11-25 10:50:05.110 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:06 compute-0 sshd-session[246490]: Received disconnect from 38.102.83.176 port 40344:11: disconnected by user
Nov 25 10:50:06 compute-0 sshd-session[246490]: Disconnected from user zuul 38.102.83.176 port 40344
Nov 25 10:50:06 compute-0 sshd-session[246478]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:50:06 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Nov 25 10:50:06 compute-0 systemd-logind[822]: Session 30 logged out. Waiting for processes to exit.
Nov 25 10:50:06 compute-0 systemd-logind[822]: Removed session 30.
Nov 25 10:50:06 compute-0 podman[247203]: 2025-11-25 10:50:06.508070874 +0000 UTC m=+0.091134243 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:50:08 compute-0 nova_compute[189381]: 2025-11-25 10:50:08.076 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:10 compute-0 nova_compute[189381]: 2025-11-25 10:50:10.112 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:13 compute-0 nova_compute[189381]: 2025-11-25 10:50:13.037 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:13 compute-0 nova_compute[189381]: 2025-11-25 10:50:13.079 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:14 compute-0 podman[247226]: 2025-11-25 10:50:14.760011638 +0000 UTC m=+0.062203197 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute)
Nov 25 10:50:14 compute-0 podman[247227]: 2025-11-25 10:50:14.807993677 +0000 UTC m=+0.104635717 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.050 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.115 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:15 compute-0 rsyslogd[236628]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.149 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.216 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.217 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.282 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.283 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.341 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.342 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.408 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.417 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.486 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.487 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.558 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.559 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.622 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.624 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:50:15 compute-0 nova_compute[189381]: 2025-11-25 10:50:15.688 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.032 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.033 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4905MB free_disk=72.15747833251953GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.034 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.034 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.148 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.148 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.149 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.149 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.216 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.231 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.259 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:50:16 compute-0 nova_compute[189381]: 2025-11-25 10:50:16.260 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:50:16 compute-0 podman[247288]: 2025-11-25 10:50:16.983103031 +0000 UTC m=+0.100000905 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Nov 25 10:50:18 compute-0 nova_compute[189381]: 2025-11-25 10:50:18.081 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:19 compute-0 nova_compute[189381]: 2025-11-25 10:50:19.260 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:20 compute-0 nova_compute[189381]: 2025-11-25 10:50:20.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:20 compute-0 nova_compute[189381]: 2025-11-25 10:50:20.119 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:22 compute-0 nova_compute[189381]: 2025-11-25 10:50:22.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:22 compute-0 nova_compute[189381]: 2025-11-25 10:50:22.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:50:22 compute-0 nova_compute[189381]: 2025-11-25 10:50:22.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:50:22 compute-0 nova_compute[189381]: 2025-11-25 10:50:22.655 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:50:22 compute-0 nova_compute[189381]: 2025-11-25 10:50:22.656 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:50:22 compute-0 nova_compute[189381]: 2025-11-25 10:50:22.656 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:50:22 compute-0 nova_compute[189381]: 2025-11-25 10:50:22.657 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:50:23 compute-0 nova_compute[189381]: 2025-11-25 10:50:23.082 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:23 compute-0 sshd-session[247309]: Accepted publickey for zuul from 38.102.83.176 port 43154 ssh2: RSA SHA256:AY70hpNEXJR6fAK1y9JiAEJ1ZGByytYoO+9neWZvmFk
Nov 25 10:50:23 compute-0 systemd-logind[822]: New session 31 of user zuul.
Nov 25 10:50:23 compute-0 systemd[1]: Started Session 31 of User zuul.
Nov 25 10:50:23 compute-0 sshd-session[247309]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 10:50:23 compute-0 podman[247311]: 2025-11-25 10:50:23.771458512 +0000 UTC m=+0.061381993 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 25 10:50:24 compute-0 nova_compute[189381]: 2025-11-25 10:50:24.235 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
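The network_info blob Nova just cached is plain JSON once extracted. A small sketch, with the structure trimmed to the fields of interest, that walks it to pair each fixed IP with its floating IPs:

```python
# Sketch: walk the Nova network_info cache from the log entry above
# and list fixed IPs with their floating associations.
network_info = [{
    "id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0",
    "address": "fa:16:3e:f3:39:09",
    "network": {
        "label": "private",
        "subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.95",
                "type": "fixed",
                "floating_ips": [{"address": "192.168.122.239", "type": "floating"}],
            }],
        }],
    },
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(f'{vif["address"]} {ip["address"]} floating={floats or "-"}')
# -> fa:16:3e:f3:39:09 192.168.0.95 floating=['192.168.122.239']
```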
Nov 25 10:50:24 compute-0 nova_compute[189381]: 2025-11-25 10:50:24.254 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:50:24 compute-0 nova_compute[189381]: 2025-11-25 10:50:24.255 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:50:24 compute-0 nova_compute[189381]: 2025-11-25 10:50:24.256 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:24 compute-0 nova_compute[189381]: 2025-11-25 10:50:24.256 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:24 compute-0 nova_compute[189381]: 2025-11-25 10:50:24.256 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:24 compute-0 nova_compute[189381]: 2025-11-25 10:50:24.257 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:50:24 compute-0 sudo[247505]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubiezyayhhbwsikcyprwescbmyfoaxzb ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764067823.8511453-60531-185470097316245/AnsiballZ_command.py'
Nov 25 10:50:24 compute-0 sudo[247505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:50:24 compute-0 python3[247507]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
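The Ansible task here shells out to `podman ps` piped through grep to check whether node_exporter is up. A rough Python equivalent of that check, assuming `podman` is on PATH:

```python
# Sketch: what the ansible.legacy.command task above does, in Python.
import subprocess

def container_status(name: str) -> list[str]:
    # `podman ps -a --format "{{.Names}} {{.Status}}"` lists every
    # container as "<name> <status>"; keep the lines mentioning `name`.
    out = subprocess.run(
        ['podman', 'ps', '-a', '--format', '{{.Names}} {{.Status}}'],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if name in line]

print(container_status('node_exporter'))
# e.g. ['node_exporter Up 2 hours (healthy)']
```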
Nov 25 10:50:24 compute-0 sudo[247505]: pam_unix(sudo:session): session closed for user root
Nov 25 10:50:25 compute-0 nova_compute[189381]: 2025-11-25 10:50:25.121 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:27 compute-0 sshd-session[247546]: Connection closed by 150.95.85.24 port 40400
Nov 25 10:50:27 compute-0 podman[247548]: 2025-11-25 10:50:27.949147706 +0000 UTC m=+0.062992479 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
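Worth noting in the node_exporter command line above: `--collector.systemd.unit-include` takes a regular expression which node_exporter (to the best of my recollection; treat the anchoring as an assumption) matches against the full unit name. A sketch of which units that pattern keeps:

```python
# Sketch: the systemd unit filter node_exporter was started with,
# assuming the pattern is anchored against the whole unit name.
import re

unit_include = re.compile(r'^(?:(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service)$')

units = ['edpm_nova_compute.service', 'ovs-vswitchd.service',
         'openvswitch.service', 'virtqemud.service', 'rsyslog.service',
         'sshd.service', 'chronyd.service']

print([u for u in units if unit_include.match(u)])
# -> sshd.service and chronyd.service are filtered out
```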
Nov 25 10:50:27 compute-0 podman[247547]: 2025-11-25 10:50:27.956886517 +0000 UTC m=+0.073347304 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, io.openshift.tags=minimal rhel9, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41)
Nov 25 10:50:28 compute-0 nova_compute[189381]: 2025-11-25 10:50:28.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:28 compute-0 nova_compute[189381]: 2025-11-25 10:50:28.085 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:29 compute-0 podman[203557]: time="2025-11-25T10:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:50:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:50:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
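These GET lines are the libpod REST API being scraped over the podman socket; judging by the podman_exporter config later in the log, the socket lives at /run/podman/podman.sock (adjust if yours differs). A minimal sketch issuing the same containers/json query over that UNIX socket:

```python
# Sketch: replay the libpod REST query from the access log above
# over the podman API socket.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a UNIX socket instead of TCP."""
    def __init__(self, path: str):
        super().__init__('localhost')
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print([(c['Names'], c['State']) for c in containers])
```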
Nov 25 10:50:30 compute-0 nova_compute[189381]: 2025-11-25 10:50:30.123 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:30 compute-0 podman[247587]: 2025-11-25 10:50:30.982367633 +0000 UTC m=+0.097464623 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:50:31 compute-0 openstack_network_exporter[205722]: ERROR   10:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:50:31 compute-0 openstack_network_exporter[205722]: ERROR   10:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:50:31 compute-0 openstack_network_exporter[205722]: ERROR   10:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:50:31 compute-0 openstack_network_exporter[205722]: ERROR   10:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:50:31 compute-0 openstack_network_exporter[205722]: ERROR   10:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
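These errors are expected on a compute node: ovn-northd and the ovsdb-server instance the exporter probes here only run on controller nodes, so their control sockets are absent on compute-0. A sketch of the kind of lookup that fails, with the socket paths and naming being assumptions based on the usual OVS/OVN rundir conventions:

```python
# Sketch: the sort of control-socket discovery the exporter is doing.
# ovn-northd runs on controllers, not computes, so on compute-0 this
# glob comes back empty and the exporter logs the errors above.
import glob

def find_ctl(daemon: str,
             rundirs=('/run/ovn', '/var/run/ovn', '/run/openvswitch')):
    for rundir in rundirs:
        # control sockets are conventionally named <daemon>.<pid>.ctl
        hits = glob.glob(f'{rundir}/{daemon}.*.ctl')
        if hits:
            return hits[0]
    raise FileNotFoundError(f'no control socket files found for {daemon}')

try:
    find_ctl('ovn-northd')
except FileNotFoundError as err:
    print(err)  # mirrors the exporter's complaint
```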
Nov 25 10:50:32 compute-0 sudo[247802]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkijezevnksipneqpkjiowaihwazoess ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764067831.7899792-60696-130850336777536/AnsiballZ_command.py'
Nov 25 10:50:32 compute-0 sudo[247802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:50:32 compute-0 podman[247760]: 2025-11-25 10:50:32.324725408 +0000 UTC m=+0.071917564 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 10:50:32 compute-0 python3[247807]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:32 compute-0 sudo[247802]: pam_unix(sudo:session): session closed for user root
Nov 25 10:50:33 compute-0 nova_compute[189381]: 2025-11-25 10:50:33.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:50:33 compute-0 nova_compute[189381]: 2025-11-25 10:50:33.088 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:35 compute-0 nova_compute[189381]: 2025-11-25 10:50:35.125 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:50:36.052 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:50:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:50:36.053 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:50:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:50:36.053 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
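oslo.concurrency logs both how long each lock was waited on ("waited 0.000s") and how long it was held ("held 0.000s"). A minimal sketch of that instrumentation with a context manager, as a hypothetical reimplementation rather than oslo's actual code:

```python
# Sketch: the wait/held timing that oslo_concurrency.lockutils logs.
import threading
import time
from contextlib import contextmanager

@contextmanager
def timed_lock(lock: threading.Lock, name: str):
    start = time.monotonic()
    lock.acquire()
    acquired = time.monotonic()
    print(f'Lock "{name}" acquired :: waited {acquired - start:.3f}s')
    try:
        yield
    finally:
        lock.release()
        print(f'Lock "{name}" released :: held {time.monotonic() - acquired:.3f}s')

lock = threading.Lock()
with timed_lock(lock, '_check_child_processes'):
    pass  # check child processes here
```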
Nov 25 10:50:36 compute-0 podman[247847]: 2025-11-25 10:50:36.944998126 +0000 UTC m=+0.062207986 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:50:38 compute-0 nova_compute[189381]: 2025-11-25 10:50:38.090 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:40 compute-0 nova_compute[189381]: 2025-11-25 10:50:40.127 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:41 compute-0 sudo[248043]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laqmhzmsnogzyuzikmsmnmpyaqnlecoh ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764067841.2359564-60851-100625429401728/AnsiballZ_command.py'
Nov 25 10:50:41 compute-0 sudo[248043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:50:41 compute-0 python3[248045]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:42 compute-0 sudo[248043]: pam_unix(sudo:session): session closed for user root
Nov 25 10:50:43 compute-0 nova_compute[189381]: 2025-11-25 10:50:43.091 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:44 compute-0 podman[248085]: 2025-11-25 10:50:44.996987506 +0000 UTC m=+0.100383626 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:50:45 compute-0 podman[248084]: 2025-11-25 10:50:45.032000375 +0000 UTC m=+0.137572507 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 25 10:50:45 compute-0 nova_compute[189381]: 2025-11-25 10:50:45.128 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:47 compute-0 podman[248119]: 2025-11-25 10:50:47.968489453 +0000 UTC m=+0.085645306 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release=1214.1726694543)
Nov 25 10:50:48 compute-0 nova_compute[189381]: 2025-11-25 10:50:48.094 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:50 compute-0 nova_compute[189381]: 2025-11-25 10:50:50.131 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:53 compute-0 nova_compute[189381]: 2025-11-25 10:50:53.095 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:53 compute-0 podman[248140]: 2025-11-25 10:50:53.946211997 +0000 UTC m=+0.056919185 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 25 10:50:55 compute-0 nova_compute[189381]: 2025-11-25 10:50:55.133 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:56 compute-0 sudo[248332]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpxtvbzgbssbgazineuxakgckkqgdbti ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764067855.7784355-61071-174318309057209/AnsiballZ_command.py'
Nov 25 10:50:56 compute-0 sudo[248332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 10:50:56 compute-0 python3[248334]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 25 10:50:56 compute-0 sudo[248332]: pam_unix(sudo:session): session closed for user root
Nov 25 10:50:58 compute-0 nova_compute[189381]: 2025-11-25 10:50:58.096 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:50:58 compute-0 podman[248372]: 2025-11-25 10:50:58.979205175 +0000 UTC m=+0.080989773 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 10:50:58 compute-0 podman[248373]: 2025-11-25 10:50:58.995537801 +0000 UTC m=+0.094278612 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 10:50:59 compute-0 podman[203557]: time="2025-11-25T10:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:50:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:50:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Nov 25 10:51:00 compute-0 nova_compute[189381]: 2025-11-25 10:51:00.135 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:01 compute-0 openstack_network_exporter[205722]: ERROR   10:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:51:01 compute-0 openstack_network_exporter[205722]: ERROR   10:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:51:01 compute-0 openstack_network_exporter[205722]: ERROR   10:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:51:01 compute-0 openstack_network_exporter[205722]: ERROR   10:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:51:01 compute-0 openstack_network_exporter[205722]: ERROR   10:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:51:01 compute-0 podman[248412]: 2025-11-25 10:51:01.990350441 +0000 UTC m=+0.100847359 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 10:51:02 compute-0 podman[248437]: 2025-11-25 10:51:02.959100112 +0000 UTC m=+0.071426960 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:51:03 compute-0 nova_compute[189381]: 2025-11-25 10:51:03.099 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.335 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.335 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
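The run of "Registering pollster" lines above shows startup wiring: each stevedore-loaded extension is bound to one shared ThreadPoolExecutor (the same 0x7f240816fbf0 address repeats in every line), with its cache, pollster history, and discovery cache all starting empty. A minimal sketch of that pattern, assuming a simplified stand-in for ceilometer's AgentManager (the class body here is illustrative, not the upstream implementation):

    import concurrent.futures

    class AgentManager:
        """Toy model of the registration step logged above (simplified)."""
        def __init__(self, max_workers=4):
            # One shared executor runs every pollster, matching the single
            # ThreadPoolExecutor address repeated across the log lines.
            self.executor = concurrent.futures.ThreadPoolExecutor(max_workers)
            self.registrations = []

        def register_pollster_execution(self, extension, source="pollsters"):
            # Caches start empty ({} in the log) and fill per polling cycle.
            self.registrations.append({"extension": extension,
                                       "source": source,
                                       "cache": {},
                                       "pollster_history": {},
                                       "discovery_cache": {}})

    mgr = AgentManager()
    mgr.register_pollster_execution("network.outgoing.bytes")
    print(len(mgr.registrations))  # -> 1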
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.345 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83ab44b9-7ddb-4994-9415-20b7dd9c081c', 'name': 'vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.350 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
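Each "instance data" line is one dict per local libvirt domain found by discover_libvirt_polling; the id, name, flavor, and metadata fields become the resource metadata attached to every sample for that instance. A hedged sketch of consuming that structure (the dict values are copied from the test_0 line above; the helper name is made up):

    instance = {
        'id': '31174924-a3e8-4662-baad-ac9aa49c01ab',
        'name': 'test_0',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1},
        'status': 'active',
        'metadata': {},
    }

    def resource_metadata(inst):
        # Flatten the fields pollsters attach to every emitted sample.
        return {
            'display_name': inst['name'],
            'instance_type': inst['flavor']['name'],
            'memory_mb': inst['flavor']['ram'],
            'vcpus': inst['flavor']['vcpus'],
        }

    print(resource_metadata(instance))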
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.350 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.351 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:51:03.351084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
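The heartbeat pair above shows two threads cooperating: worker 14 records the beat when the pollster runs, and worker 12 later reports "Updated heartbeat for <name> (<timestamp>)". A minimal sketch of that bookkeeping, assuming it reduces to a per-pollster timestamp map (the real agent's status machinery is more involved):

    from datetime import datetime, timezone

    heartbeats = {}

    def heartbeat(pollster_name):
        # Worker 14 records the beat; a separate status worker (the "12"
        # in the log) later reads this map and logs the update line.
        heartbeats[pollster_name] = datetime.now(timezone.utc)

    heartbeat("network.outgoing.bytes")
    print(heartbeats["network.outgoing.bytes"].isoformat())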
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.356 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.360 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
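The two volume lines in this cycle (2398 and 2454 bytes) are cumulative per-instance interface counters turned into samples by _stats_to_sample. A sketch of the shape of that conversion, assuming a pared-down Sample type (the upstream sample model carries more fields):

    from dataclasses import dataclass

    @dataclass
    class Sample:
        name: str
        unit: str
        volume: int
        resource_id: str

    def stats_to_sample(instance_id, meter, value):
        # Mirrors "<uuid>/network.outgoing.bytes volume: 2398": one
        # cumulative byte counter per instance network interface.
        return Sample(name=meter, unit="B", volume=value,
                      resource_id=instance_id)

    print(stats_to_sample("83ab44b9-7ddb-4994-9415-20b7dd9c081c",
                          "network.outgoing.bytes", 2398))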
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.361 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.361 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.362 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.362 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:51:03.361897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
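The .delta meter reports the change in the cumulative counter since the previous cycle: instance 83ab44b9 shows 0 (no traffic since last poll) while test_0 shows 70. A minimal sketch of that computation, assuming a simple previous-value cache (the exact caching ceilometer uses differs in detail):

    previous = {}

    def delta(resource_id, current_total):
        # First sighting primes the cache, so the delta starts at 0;
        # afterwards it is the bytes moved since the last cycle.
        last = previous.get(resource_id, current_total)
        previous[resource_id] = current_total
        return current_total - last

    print(delta("test_0", 2384))  # 0: cache primed
    print(delta("test_0", 2454))  # 70, as in the log line above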
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.363 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.363 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:51:03.363301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.389 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.420 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
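The fractional memory.usage volumes (48.890625 and 48.8359375, against the 512 MiB m1.small flavor) are consistent with libvirt's memoryStats(), which reports values in KiB, divided by 1024 into MiB. A sketch under that assumption; which memoryStats keys ceilometer actually combines varies by driver and version, so rss here is only an illustrative stand-in:

    def memory_usage_mib(memory_stats):
        # libvirt memoryStats() values are in KiB; /1024 yields the
        # fractional MiB figures seen above. The choice of 'rss' is an
        # assumption for illustration, not the upstream formula.
        return memory_stats["rss"] / 1024.0

    print(memory_usage_mib({"rss": 50064}))  # -> 48.890625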
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.421 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
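The skip line above is the fast path: discovery ran for network.outgoing.bytes.rate, but everything it returned was already handled this cycle, so the pollster is not executed at all. A hedged sketch of that guard (function and argument names are made up):

    def maybe_poll(pollster_name, discovered, handled):
        # If discovery produced nothing the cycle hasn't already covered,
        # log a skip instead of polling again.
        fresh = [r for r in discovered if r not in handled]
        if not fresh:
            return f"Skip pollster {pollster_name}, no new resources found this cycle"
        handled.update(fresh)
        return fresh

    print(maybe_poll("network.outgoing.bytes.rate", ["vm-a"], {"vm-a"}))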
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.422 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.422 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.422 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.422 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.423 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.424 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.424 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.424 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.425 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.426 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.426 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:51:03.422335) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.426 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:51:03.424238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.427 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:51:03.426124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.428 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/cpu volume: 38090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.428 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 49030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:51:03.427673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
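The cpu meter is cumulative guest CPU time in nanoseconds (38090000000 ns is about 38.09 s of CPU since the instance started), so utilization requires two successive samples and the wall-clock gap between them. A worked sketch of that derivation (the 30 s cycle and the second reading are hypothetical numbers for illustration):

    def cpu_util_percent(prev_ns, cur_ns, wall_seconds, vcpus):
        # Fraction of available CPU consumed between two cumulative
        # nanosecond readings, normalized by vCPU count.
        return (cur_ns - prev_ns) / (wall_seconds * 1e9 * vcpus) * 100.0

    # e.g. 0.6 s of CPU over a 30 s cycle on the 1-vCPU m1.small -> 2.0 %
    print(cpu_util_percent(38_090_000_000, 38_690_000_000, 30, 1))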
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.429 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.429 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.429 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.429 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.430 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.430 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:51:03.429437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:51:03.430750) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.454 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.455 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.455 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.478 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.479 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.479 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
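Each instance emits three disk.device.capacity samples: two 1073741824-byte (1 GiB) disks, matching the m1.small root and ephemeral sizes from the flavor, plus a small third device (583680 or 485376 bytes, plausibly a config drive; the log names no devices, so that reading is an assumption). A sketch of the per-device iteration with assumed virtio names:

    GIB = 1024 ** 3

    # Device names below are assumptions for illustration; the log
    # prints only one capacity line per attached disk.
    for device, capacity in [("vda", 1 * GIB), ("vdb", 1 * GIB),
                             ("vdc", 583680)]:
        print(f"disk.device.capacity {device}: {capacity} B "
              f"({capacity / GIB:.6f} GiB)")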
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.480 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:51:03.480366) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.552 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.552 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.553 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.623 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.624 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.624 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
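The per-device read counters come from libvirt's block statistics: virDomain.blockStats(dev) returns the tuple (rd_req, rd_bytes, wr_req, wr_bytes, errs), and the second field is what disk.device.read.bytes carries. A sketch assuming the libvirt-python binding is available and that the devices use typical virtio names (the log does not print them):

    import libvirt  # assumes the libvirt-python binding is installed

    def read_bytes_per_device(domain, devices=("vda", "vdb", "vdc")):
        # One sample per attached disk, matching the three
        # disk.device.read.bytes lines per instance above.
        # Device names here are assumptions for illustration.
        return {dev: domain.blockStats(dev)[1] for dev in devices}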
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.625 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.625 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.625 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 567192189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.625 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 97341337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:51:03.625375) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.626 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 75612085 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.626 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.627 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.627 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
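The read.latency volumes (e.g. 567192189) are consistent with cumulative nanoseconds spent servicing reads, which libvirt exposes via the richer blockStatsFlags() call under the 'rd_total_times' key. A sketch under that assumption, again reusing the libvirt-python binding:

    import libvirt  # assumes the libvirt-python binding is installed

    def read_latency_ns(domain, dev):
        # 'rd_total_times' is cumulative time spent on reads, in ns;
        # 567192189 ns ~= 0.57 s of total read service time so far.
        return domain.blockStatsFlags(dev)["rd_total_times"]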
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.629 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.630 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.630 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.631 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.632 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.632 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:51:03.629698) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.633 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.635 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.635 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.636 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.636 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.637 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:51:03.636758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.637 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.637 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.638 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.638 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.639 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.639 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.641 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:51:03.640944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.641 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.641 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.641 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.642 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.642 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.643 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.643 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 1590671507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:51:03.643432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.644 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 14157667 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.644 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.644 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.644 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.644 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.645 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.645 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.645 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.645 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.646 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:51:03.645921) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.646 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.646 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.647 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.647 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.647 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.647 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.647 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.648 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.648 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.648 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:51:03.647273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.648 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.649 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
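[editor's note] Each meter above follows the same five-step cycle: discovery via the [local_instances] method, a coordination check against the (unconfigured) hashring set, a heartbeat update, one _stats_to_sample line per instance/device, and a closing "Finished polling" marker. A minimal Python sketch of that control flow, with illustrative names (Pollster, discover_local_instances) rather than ceilometer's real classes:

    # Minimal sketch of the per-pollster cycle visible in the log above.
    # Pollster and discover_local_instances are illustrative names, not
    # ceilometer's actual API.
    from datetime import datetime, timezone

    def discover_local_instances():
        # Stand-in for the [local_instances] discovery method, which in
        # ceilometer asks libvirt for the domains on this compute host.
        return ["83ab44b9-7ddb-4994-9415-20b7dd9c081c",
                "31174924-a3e8-4662-baad-ac9aa49c01ab"]

    class Pollster:
        def __init__(self, name, stats_fn):
            self.name = name
            self.stats_fn = stats_fn      # per-instance list of volumes
            self.heartbeat = None

        def run(self):
            resources = discover_local_instances()
            # Coordination is skipped when no hashring group is
            # configured, matching "coordination group name [None]".
            self.heartbeat = datetime.now(timezone.utc)
            for instance_id in resources:
                for volume in self.stats_fn(instance_id):
                    print(f"{instance_id}/{self.name} volume: {volume}")
            print(f"Finished polling pollster {self.name}")

    Pollster("disk.device.write.requests", lambda _id: [232, 1, 0]).run()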
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.649 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.649 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.650 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.650 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.650 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:51:03.650046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.650 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.651 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.651 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.651 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.651 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.651 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.651 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.652 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:51:03.651481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.652 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.652 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.652 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.653 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.653 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.653 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.653 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.654 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.654 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.654 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.655 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:51:03.654156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.655 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.655 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:51:03.655494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.656 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.656 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
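[editor's note] Note the worker id column (the number after the millisecond timestamp): samples come from worker 14 while every "Updated heartbeat" line comes from worker 12, so heartbeat persistence appears to run concurrently with polling rather than inline, which is consistent with the heartbeat timestamps trailing the matching poll by a few hundred microseconds. A sketch of that producer/consumer split, assuming a simple queue-based hand-off (the actual mechanism is not shown in this log):

    # Sketch: a polling thread hands heartbeat updates to a separate
    # status thread, mirroring the interleaved worker ids (14 vs 12).
    # The queue-based design is an assumption for illustration only.
    import queue
    import threading
    from datetime import datetime, timezone

    status_q = queue.Queue()

    def status_worker():
        while (name := status_q.get()) is not None:
            print(f"Updated heartbeat for {name} "
                  f"({datetime.now(timezone.utc).isoformat()})")

    t = threading.Thread(target=status_worker, daemon=True)
    t.start()
    for meter in ("network.incoming.packets", "disk.root.size"):
        status_q.put(meter)   # polling thread enqueues; status thread logs
    status_q.put(None)        # sentinel: drain and stop
    t.join()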
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.656 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.657 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.657 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.657 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.657 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.658 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.658 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.658 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.658 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.659 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.659 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:51:03.656982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.659 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.659 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:51:03.658120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.659 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:51:03.659446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.660 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.660 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.661 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:51:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:51:03.662 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
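[editor's note] The burst of "Finished processing pollster [...]" lines at 10:51:03.660-.662 closes the polling task: every configured meter for both instances was swept in well under a second. When auditing such cycles it can help to pair each "Polling pollster" INFO line with its "Finished polling" counterpart to get per-meter timings; a small parser sketch keyed to this journal's line format (a hypothetical helper, not part of ceilometer):

    # Sketch: derive per-pollster durations from the INFO lines above.
    import re
    from datetime import datetime

    LINE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO "
        r"ceilometer\.polling\.manager \[-\] "
        r"(?P<event>Polling|Finished polling) pollster (?P<name>\S+)")

    def pollster_durations(lines):
        started, durations = {}, {}
        for line in lines:
            m = LINE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S.%f")
            if m["event"] == "Polling":
                started[m["name"]] = ts
            elif m["name"] in started:
                delta = ts - started.pop(m["name"])
                durations[m["name"]] = delta.total_seconds()
        return durations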
Nov 25 10:51:05 compute-0 nova_compute[189381]: 2025-11-25 10:51:05.138 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:07 compute-0 podman[248457]: 2025-11-25 10:51:07.948182415 +0000 UTC m=+0.062827144 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
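[editor's note] This podman health_status event (like the ceilometer_agent_compute, ceilometer_agent_ipmi, and kepler ones below) is the healthcheck configured in config_data firing and reporting healthy with a zero failing streak. The same state can be read back on demand with podman inspect; a sketch, with the container name taken from the event above:

    # Sketch: query a container's last healthcheck result the way
    # these journal events report it, via podman's Go-template output.
    import json
    import subprocess

    def health(container):
        out = subprocess.run(
            ["podman", "inspect", container, "--format",
             "{{json .State.Health}}"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # Example: health("podman_exporter")["Status"] -> "healthy"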
Nov 25 10:51:08 compute-0 nova_compute[189381]: 2025-11-25 10:51:08.100 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:10 compute-0 nova_compute[189381]: 2025-11-25 10:51:10.140 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:13 compute-0 nova_compute[189381]: 2025-11-25 10:51:13.104 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:14 compute-0 nova_compute[189381]: 2025-11-25 10:51:14.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:14 compute-0 sshd-session[248481]: Connection closed by authenticating user root 171.244.51.45 port 57942 [preauth]
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
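[editor's note] The three lockutils lines above are oslo.concurrency's standard named-lock trace: the resource tracker acquires "compute_resources", runs clean_compute_node_cache, and releases it, logging both the wait (0.001s) and hold (0.001s) times. In application code the pattern is usually written as the decorator or context-manager form; a minimal sketch:

    # Minimal sketch of the oslo.concurrency named-lock pattern that
    # produces the Acquiring/acquired/released lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body runs with the named lock held (about 1 ms in the log).
        pass

    clean_compute_node_cache()

    # Equivalent context-manager form:
    with lockutils.lock("compute_resources"):
        pass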
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.050 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.122 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.142 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.184 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.185 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.250 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.251 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.312 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.313 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.383 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.390 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.459 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.460 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.526 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.527 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.588 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.589 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.652 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
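[editor's note] The resource audit sizes every instance disk (disk and disk.eph0 for each of the two instances, each probed twice) by running qemu-img info under oslo_concurrency.prlimit, which caps the child's address space at 1 GiB and its CPU time at 30 s so a malformed image cannot wedge the agent; each probe returns in roughly 60-70 ms here. Stripped of the prlimit wrapper, the probe itself looks like this (flags verbatim from the logged command):

    # Sketch: the disk probe nova runs above, minus the
    # oslo_concurrency.prlimit resource caps.
    import json
    import os
    import subprocess

    def qemu_img_info(path):
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True,
            env={**os.environ, "LC_ALL": "C", "LANG": "C"}).stdout
        return json.loads(out)

    # e.g. qemu_img_info("/var/lib/nova/instances/<uuid>/disk")["virtual-size"]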
Nov 25 10:51:15 compute-0 podman[248507]: 2025-11-25 10:51:15.982097648 +0000 UTC m=+0.087306893 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Nov 25 10:51:15 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.998 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:15.999 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4946MB free_disk=72.15749740600586GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.000 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.000 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:51:16 compute-0 podman[248508]: 2025-11-25 10:51:16.007137472 +0000 UTC m=+0.111927646 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.084 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.084 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.084 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.084 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.148 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.163 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.165 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:51:16 compute-0 nova_compute[189381]: 2025-11-25 10:51:16.166 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
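[annotation] The resource-tracker pass above ends with the inventory comparison at 10:51:16.163. A minimal sketch (values copied from that "Inventory has not changed" line) of how Placement derives schedulable capacity, assuming the standard (total - reserved) * allocation_ratio formula:

    # capacity_check.py -- sketch; inventory values copied from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # capacity = (total - reserved) * allocation_ratio
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: capacity={capacity}")

This yields 32 schedulable VCPUs, consistent with the "Final resource view" line reporting used_vcpus=2 of total_vcpus=8 at a 4.0 ratio.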
Nov 25 10:51:18 compute-0 nova_compute[189381]: 2025-11-25 10:51:18.106 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:18 compute-0 podman[248547]: 2025-11-25 10:51:18.963677901 +0000 UTC m=+0.072341016 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, version=9.4, io.buildah.version=1.29.0, release-0.7.12=)
Nov 25 10:51:20 compute-0 nova_compute[189381]: 2025-11-25 10:51:20.145 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:21 compute-0 nova_compute[189381]: 2025-11-25 10:51:21.167 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:22 compute-0 nova_compute[189381]: 2025-11-25 10:51:22.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:23 compute-0 nova_compute[189381]: 2025-11-25 10:51:23.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:23 compute-0 nova_compute[189381]: 2025-11-25 10:51:23.109 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:24 compute-0 nova_compute[189381]: 2025-11-25 10:51:24.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:24 compute-0 nova_compute[189381]: 2025-11-25 10:51:24.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:51:24 compute-0 nova_compute[189381]: 2025-11-25 10:51:24.729 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:51:24 compute-0 nova_compute[189381]: 2025-11-25 10:51:24.730 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:51:24 compute-0 nova_compute[189381]: 2025-11-25 10:51:24.730 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:51:24 compute-0 podman[248569]: 2025-11-25 10:51:24.975197457 +0000 UTC m=+0.085773559 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:51:25 compute-0 nova_compute[189381]: 2025-11-25 10:51:25.147 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:26 compute-0 nova_compute[189381]: 2025-11-25 10:51:26.132 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updating instance_info_cache with network_info: [{"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
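[annotation] The network_info blob logged above is plain JSON. A short sketch of extracting the fixed and floating addresses from it, trimmed to the fields that actually appear in that line:

    # vif_addresses.py -- sketch over the network_info structure logged above.
    vif = {
        "id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.243",
                     "floating_ips": [{"address": "192.168.122.220"}]}],
        }]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print("fixed:", ip["address"], "floating:", floats)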
Nov 25 10:51:26 compute-0 nova_compute[189381]: 2025-11-25 10:51:26.146 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:51:26 compute-0 nova_compute[189381]: 2025-11-25 10:51:26.146 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:51:26 compute-0 nova_compute[189381]: 2025-11-25 10:51:26.147 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:26 compute-0 nova_compute[189381]: 2025-11-25 10:51:26.147 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:26 compute-0 nova_compute[189381]: 2025-11-25 10:51:26.148 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:51:28 compute-0 nova_compute[189381]: 2025-11-25 10:51:28.110 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:29 compute-0 podman[203557]: time="2025-11-25T10:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:51:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:51:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
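[annotation] The two GET lines above are the podman exporter polling the libpod REST API over the podman socket (the socket path appears as CONTAINER_HOST=unix:///run/podman/podman.sock in the podman_exporter config further down). A self-contained sketch of the same containers/json call; the UnixHTTPConnection helper is illustrative, not part of any library here:

    # libpod_list.py -- sketch of GET /v4.9.3/libpod/containers/json over
    # the unix socket; assumes the podman API service is listening there.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")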
Nov 25 10:51:29 compute-0 podman[248589]: 2025-11-25 10:51:29.948079931 +0000 UTC m=+0.062371102 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:51:29 compute-0 podman[248588]: 2025-11-25 10:51:29.975852703 +0000 UTC m=+0.094055345 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal)
Nov 25 10:51:30 compute-0 nova_compute[189381]: 2025-11-25 10:51:30.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:51:30 compute-0 nova_compute[189381]: 2025-11-25 10:51:30.151 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:31 compute-0 openstack_network_exporter[205722]: ERROR   10:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:51:31 compute-0 openstack_network_exporter[205722]: ERROR   10:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:51:31 compute-0 openstack_network_exporter[205722]: ERROR   10:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:51:31 compute-0 openstack_network_exporter[205722]: ERROR   10:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:51:31 compute-0 openstack_network_exporter[205722]: ERROR   10:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
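[annotation] The four ERROR lines above recur on every scrape: openstack_network_exporter probes OVS/OVN daemons through their unix control sockets, and ovn-northd and the OVS DB server do not run on a compute node, so no socket files exist. A hedged sketch of the underlying check; the glob pattern and directory are assumptions inferred from the /var/lib/openvswitch/ovn:/run/ovn mount in the ovn_controller config below, not the exporter's actual code:

    # northd_probe.py -- sketch; path and socket naming are assumptions.
    import glob

    if not glob.glob("/var/lib/openvswitch/ovn/ovn-northd.*.ctl"):
        print("ovn-northd not local; northd collectors will log the error above")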
Nov 25 10:51:33 compute-0 podman[248631]: 2025-11-25 10:51:33.009073371 +0000 UTC m=+0.123910448 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:51:33 compute-0 podman[248656]: 2025-11-25 10:51:33.086962764 +0000 UTC m=+0.075126615 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd)
Nov 25 10:51:33 compute-0 nova_compute[189381]: 2025-11-25 10:51:33.112 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:34 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 10:51:35 compute-0 nova_compute[189381]: 2025-11-25 10:51:35.153 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:51:36.053 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:51:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:51:36.054 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:51:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:51:36.054 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
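[annotation] The Acquiring/acquired/released triplet above (with waited/held timings) is oslo.concurrency's standard lock logging. A sketch of the decorator pattern that produces it; the guarded body is a stand-in, not neutron's actual implementation:

    # process_monitor_lock.py -- sketch, assuming oslo.concurrency is installed.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # hypothetical body: inspect monitored children, respawn any that died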
Nov 25 10:51:38 compute-0 nova_compute[189381]: 2025-11-25 10:51:38.114 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:38 compute-0 podman[248676]: 2025-11-25 10:51:38.930852135 +0000 UTC m=+0.049879445 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:51:40 compute-0 nova_compute[189381]: 2025-11-25 10:51:40.156 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:43 compute-0 nova_compute[189381]: 2025-11-25 10:51:43.116 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:45 compute-0 nova_compute[189381]: 2025-11-25 10:51:45.158 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:46 compute-0 podman[248700]: 2025-11-25 10:51:46.958705127 +0000 UTC m=+0.067346063 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.3)
Nov 25 10:51:46 compute-0 podman[248699]: 2025-11-25 10:51:46.972663455 +0000 UTC m=+0.085243354 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 10:51:48 compute-0 nova_compute[189381]: 2025-11-25 10:51:48.119 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:49 compute-0 podman[248740]: 2025-11-25 10:51:49.944154958 +0000 UTC m=+0.058953983 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=kepler, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 10:51:50 compute-0 nova_compute[189381]: 2025-11-25 10:51:50.162 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:53 compute-0 nova_compute[189381]: 2025-11-25 10:51:53.120 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:55 compute-0 nova_compute[189381]: 2025-11-25 10:51:55.164 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:55 compute-0 podman[248759]: 2025-11-25 10:51:55.956639553 +0000 UTC m=+0.073813088 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 10:51:56 compute-0 sshd-session[247323]: Received disconnect from 38.102.83.176 port 43154:11: disconnected by user
Nov 25 10:51:56 compute-0 sshd-session[247323]: Disconnected from user zuul 38.102.83.176 port 43154
Nov 25 10:51:56 compute-0 sshd-session[247309]: pam_unix(sshd:session): session closed for user zuul
Nov 25 10:51:56 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Nov 25 10:51:56 compute-0 systemd[1]: session-31.scope: Consumed 3.795s CPU time.
Nov 25 10:51:56 compute-0 systemd-logind[822]: Session 31 logged out. Waiting for processes to exit.
Nov 25 10:51:56 compute-0 systemd-logind[822]: Removed session 31.
Nov 25 10:51:58 compute-0 nova_compute[189381]: 2025-11-25 10:51:58.122 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:51:59 compute-0 podman[203557]: time="2025-11-25T10:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:51:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:51:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Nov 25 10:52:00 compute-0 nova_compute[189381]: 2025-11-25 10:52:00.167 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:00 compute-0 podman[248776]: 2025-11-25 10:52:00.975911697 +0000 UTC m=+0.091095071 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:52:00 compute-0 podman[248777]: 2025-11-25 10:52:00.987266561 +0000 UTC m=+0.091553454 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
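[annotation] The exporters above all bind host-network ports listed in their 'ports' config: 9100 (node_exporter), 9105 (openstack_network_exporter), 9882 (podman_exporter), 8888 (kepler). A sketch of probing their Prometheus metrics endpoints; note the web.config.file options may enforce TLS, in which case this plain-HTTP probe would be refused:

    # scrape_metrics.py -- sketch; ports copied from the config_data above.
    import urllib.request

    for port in (9100, 9105, 9882, 8888):
        url = f"http://localhost:{port}/metrics"
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                print(port, resp.status, len(resp.read()), "bytes")
        except OSError as exc:
            print(port, "unreachable:", exc)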
Nov 25 10:52:01 compute-0 openstack_network_exporter[205722]: ERROR   10:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:52:01 compute-0 openstack_network_exporter[205722]: ERROR   10:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:52:01 compute-0 openstack_network_exporter[205722]: ERROR   10:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:52:01 compute-0 openstack_network_exporter[205722]: ERROR   10:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:52:01 compute-0 openstack_network_exporter[205722]: ERROR   10:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:52:03 compute-0 nova_compute[189381]: 2025-11-25 10:52:03.124 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:03 compute-0 podman[248823]: 2025-11-25 10:52:03.964801772 +0000 UTC m=+0.078625385 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:52:04 compute-0 podman[248822]: 2025-11-25 10:52:04.017703182 +0000 UTC m=+0.123628580 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 10:52:05 compute-0 nova_compute[189381]: 2025-11-25 10:52:05.170 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:08 compute-0 nova_compute[189381]: 2025-11-25 10:52:08.126 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:09 compute-0 podman[248864]: 2025-11-25 10:52:09.949990332 +0000 UTC m=+0.069327079 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:52:10 compute-0 nova_compute[189381]: 2025-11-25 10:52:10.171 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:13 compute-0 nova_compute[189381]: 2025-11-25 10:52:13.128 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:15 compute-0 nova_compute[189381]: 2025-11-25 10:52:15.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:15 compute-0 nova_compute[189381]: 2025-11-25 10:52:15.174 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.059 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.060 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.061 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.140 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.204 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.205 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.264 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.265 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.323 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.324 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.388 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.394 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.453 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.454 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.515 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.517 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.574 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.575 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.646 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:52:17 compute-0 podman[248913]: 2025-11-25 10:52:17.963080042 +0000 UTC m=+0.070810813 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 10:52:17 compute-0 podman[248912]: 2025-11-25 10:52:17.989174096 +0000 UTC m=+0.098550184 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true)
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.991 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.992 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4916MB free_disk=72.15749740600586GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.993 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:52:17 compute-0 nova_compute[189381]: 2025-11-25 10:52:17.993 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.090 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.090 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.090 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.091 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.131 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.162 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.200 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.201 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:52:18 compute-0 nova_compute[189381]: 2025-11-25 10:52:18.202 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:52:20 compute-0 nova_compute[189381]: 2025-11-25 10:52:20.176 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:20 compute-0 podman[248953]: 2025-11-25 10:52:20.957267763 +0000 UTC m=+0.067365023 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 25 10:52:21 compute-0 nova_compute[189381]: 2025-11-25 10:52:21.202 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:22 compute-0 nova_compute[189381]: 2025-11-25 10:52:22.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:23 compute-0 nova_compute[189381]: 2025-11-25 10:52:23.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:23 compute-0 nova_compute[189381]: 2025-11-25 10:52:23.133 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:24 compute-0 nova_compute[189381]: 2025-11-25 10:52:24.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:24 compute-0 nova_compute[189381]: 2025-11-25 10:52:24.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:52:24 compute-0 nova_compute[189381]: 2025-11-25 10:52:24.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:52:24 compute-0 nova_compute[189381]: 2025-11-25 10:52:24.792 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:52:24 compute-0 nova_compute[189381]: 2025-11-25 10:52:24.793 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:52:24 compute-0 nova_compute[189381]: 2025-11-25 10:52:24.793 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 10:52:24 compute-0 nova_compute[189381]: 2025-11-25 10:52:24.793 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:52:25 compute-0 nova_compute[189381]: 2025-11-25 10:52:25.179 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:26 compute-0 podman[248973]: 2025-11-25 10:52:26.999695393 +0000 UTC m=+0.104844674 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 10:52:27 compute-0 nova_compute[189381]: 2025-11-25 10:52:27.007 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [{"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:52:27 compute-0 nova_compute[189381]: 2025-11-25 10:52:27.019 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-31174924-a3e8-4662-baad-ac9aa49c01ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:52:27 compute-0 nova_compute[189381]: 2025-11-25 10:52:27.020 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 10:52:27 compute-0 nova_compute[189381]: 2025-11-25 10:52:27.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:27 compute-0 nova_compute[189381]: 2025-11-25 10:52:27.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:27 compute-0 nova_compute[189381]: 2025-11-25 10:52:27.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:52:28 compute-0 nova_compute[189381]: 2025-11-25 10:52:28.137 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:29 compute-0 podman[203557]: time="2025-11-25T10:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:52:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:52:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 25 10:52:30 compute-0 nova_compute[189381]: 2025-11-25 10:52:30.182 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:31 compute-0 openstack_network_exporter[205722]: ERROR   10:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:52:31 compute-0 openstack_network_exporter[205722]: ERROR   10:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:52:31 compute-0 openstack_network_exporter[205722]: ERROR   10:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:52:31 compute-0 openstack_network_exporter[205722]: ERROR   10:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:52:31 compute-0 openstack_network_exporter[205722]: ERROR   10:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:52:31 compute-0 podman[248992]: 2025-11-25 10:52:31.965889211 +0000 UTC m=+0.065109069 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:52:31 compute-0 podman[248991]: 2025-11-25 10:52:31.997574756 +0000 UTC m=+0.105637527 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc.)
Nov 25 10:52:32 compute-0 nova_compute[189381]: 2025-11-25 10:52:32.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:33 compute-0 nova_compute[189381]: 2025-11-25 10:52:33.141 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:34 compute-0 podman[249037]: 2025-11-25 10:52:34.986604932 +0000 UTC m=+0.098802131 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=multipathd)
Nov 25 10:52:35 compute-0 podman[249036]: 2025-11-25 10:52:35.011520403 +0000 UTC m=+0.128179479 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:52:35 compute-0 nova_compute[189381]: 2025-11-25 10:52:35.184 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:36 compute-0 nova_compute[189381]: 2025-11-25 10:52:36.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:52:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:52:36.054 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:52:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:52:36.055 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:52:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:52:36.056 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:52:38 compute-0 nova_compute[189381]: 2025-11-25 10:52:38.143 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:40 compute-0 nova_compute[189381]: 2025-11-25 10:52:40.186 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:40 compute-0 podman[249082]: 2025-11-25 10:52:40.313940678 +0000 UTC m=+0.084231875 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:52:43 compute-0 nova_compute[189381]: 2025-11-25 10:52:43.145 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:45 compute-0 nova_compute[189381]: 2025-11-25 10:52:45.188 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:48 compute-0 nova_compute[189381]: 2025-11-25 10:52:48.146 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:48 compute-0 podman[249106]: 2025-11-25 10:52:48.948034225 +0000 UTC m=+0.064266627 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251118)
Nov 25 10:52:48 compute-0 podman[249105]: 2025-11-25 10:52:48.95241059 +0000 UTC m=+0.070321161 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 10:52:50 compute-0 nova_compute[189381]: 2025-11-25 10:52:50.190 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:51 compute-0 podman[249146]: 2025-11-25 10:52:51.964861331 +0000 UTC m=+0.070920178 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:52:53 compute-0 nova_compute[189381]: 2025-11-25 10:52:53.148 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:55 compute-0 nova_compute[189381]: 2025-11-25 10:52:55.193 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:57 compute-0 podman[249165]: 2025-11-25 10:52:57.933664801 +0000 UTC m=+0.054572559 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 10:52:58 compute-0 nova_compute[189381]: 2025-11-25 10:52:58.152 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:52:59 compute-0 podman[203557]: time="2025-11-25T10:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:52:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:52:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 25 10:53:00 compute-0 nova_compute[189381]: 2025-11-25 10:53:00.196 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:01 compute-0 openstack_network_exporter[205722]: ERROR   10:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:53:01 compute-0 openstack_network_exporter[205722]: ERROR   10:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:53:01 compute-0 openstack_network_exporter[205722]: ERROR   10:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:53:01 compute-0 openstack_network_exporter[205722]: ERROR   10:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:53:01 compute-0 openstack_network_exporter[205722]: ERROR   10:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:53:02 compute-0 podman[249184]: 2025-11-25 10:53:02.960969168 +0000 UTC m=+0.078808695 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Nov 25 10:53:02 compute-0 podman[249185]: 2025-11-25 10:53:02.9902627 +0000 UTC m=+0.103363381 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:53:03 compute-0 nova_compute[189381]: 2025-11-25 10:53:03.153 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.336 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.336 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.344 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '83ab44b9-7ddb-4994-9415-20b7dd9c081c', 'name': 'vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {'metering.server_group': 'd1a74954-729e-4b7f-a26d-ccdc925aa15b'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.348 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'name': 'test_0', 'flavor': {'id': '8b869036-db8e-4fd3-b57a-e59e272f3c73', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'd3f57a9d-2502-43be-9afd-d2b6e1c15c08'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'user_id': 'af7a147d86064a21a94066f72173bba2', 'hostId': '5a89ff79501acf514ea7dfac9023ad6d2b7766f06a2ead2ad542f3dd', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
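The discovery payloads above are plain dicts, and the flavor sizing they carry (m1.small: 1 vCPU, 512 MB RAM, 1 GB disk, 1 GB ephemeral) is what makes the later memory and capacity samples interpretable. A trimmed sketch of reading those fields, keeping only what this excerpt uses:

    # Trimmed copy of one discovery payload from the log above.
    instance = {
        'id': '31174924-a3e8-4662-baad-ac9aa49c01ab',
        'name': 'test_0',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,
                   'disk': 1, 'ephemeral': 1, 'swap': 0},
        'status': 'active',
    }
    print(instance['id'], instance['flavor']['ram'], 'MB RAM,',
          instance['flavor']['disk'], 'GB disk')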
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.348 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
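With the coordination group name [None], the agent concludes no hash-ring membership test is needed and polls every discovered resource itself. As a generic illustration of what such a membership test could look like when a group is configured (this is not tooz or ceilometer code, just the general idea):

    # Generic hash-ring style membership check, for illustration only: an
    # agent polls a resource when the resource's hash lands in its partition.
    import hashlib

    def belongs_to_me(resource_id, my_index, group_size):
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return digest % group_size == my_index

    hashring = None                     # the case logged above
    resource = '83ab44b9-7ddb-4994-9415-20b7dd9c081c'
    if hashring is None:
        print('no coordination configured; polling', resource)
    else:
        print('poll?', belongs_to_me(resource, my_index=0, group_size=2))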
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.349 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T10:53:03.349328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.354 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.357 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.358 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.358 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.358 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.359 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
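A .delta meter reports the change in the cumulative counter since the previous poll. The previous reading is not shown in this excerpt, so the arithmetic below assumes one; only the current cumulative value (2468) and the delta (70) for instance 83ab44b9 come from the log:

    # Illustrative arithmetic: delta = cumulative_now - cumulative_previous.
    prev_outgoing = 2398               # assumed earlier cumulative reading
    curr_outgoing = 2468               # logged network.outgoing.bytes above
    assert curr_outgoing - prev_outgoing == 70   # logged .delta sample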
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.359 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.359 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T10:53:03.358738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T10:53:03.359875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.379 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/memory.usage volume: 48.890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.404 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/memory.usage volume: 48.8359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
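memory.usage is reported in MB, so the two samples above sit just under 10% of the 512 MB the m1.small flavor grants each instance:

    # Quick check of the two memory.usage samples against flavor RAM.
    flavor_ram_mb = 512
    for used_mb in (48.890625, 48.8359375):
        print(f'{used_mb} MB = {used_mb / flavor_ram_mb:.1%} of flavor RAM')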
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.405 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
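Rate-type meters such as network.outgoing.bytes.rate are derived by dividing the counter delta by the elapsed time between readings; both numbers below are assumed for illustration, since this excerpt covers only a single cycle (generic formula, not ceilometer's implementation):

    # rate = (counter_now - counter_prev) / elapsed_seconds
    prev, curr = 2398, 2468            # assumed consecutive cumulative readings
    interval_s = 300.0                 # assumed polling interval
    print(f'{(curr - prev) / interval_s:.3f} B/s')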
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.405 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.406 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.406 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.407 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T10:53:03.405920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.407 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T10:53:03.407281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.408 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.408 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.408 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/cpu volume: 39330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T10:53:03.408361) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T10:53:03.409508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.409 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/cpu volume: 50320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
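The cpu meter is cumulative CPU time in nanoseconds, so the two samples above correspond to roughly 39 and 50 seconds of accumulated guest CPU time:

    # Convert the logged cpu samples (nanoseconds) to seconds.
    # The short names are uuid prefixes from the log above.
    for uuid, cpu_ns in (('83ab44b9', 39_330_000_000),
                         ('31174924', 50_320_000_000)):
        print(uuid, cpu_ns / 1e9, 's of CPU time')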
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.410 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.410 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T10:53:03.410642) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.411 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.411 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T10:53:03.411752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.431 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.432 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.432 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.451 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.451 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.451 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
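disk.device.capacity is reported in bytes per block device, three devices per instance here: 1073741824 B is exactly 1 GiB, consistent with the flavor's 1 GB root and 1 GB ephemeral disks, while the third, much smaller device per instance is left uninterpreted here:

    # 1073741824 B == 1 GiB; the first instance's capacity samples, in GiB.
    GiB = 1024 ** 3
    for cap_bytes in (1073741824, 1073741824, 583680):
        print(cap_bytes, '->', cap_bytes / GiB, 'GiB')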
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.452 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.452 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T10:53:03.452782) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.510 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.511 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.511 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.567 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.568 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.568 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.568 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.569 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.569 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.569 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.569 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.569 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 567192189 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T10:53:03.569410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.570 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 97341337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.570 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.latency volume: 75612085 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.570 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 2805011252 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.570 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 220536874 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.570 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.latency volume: 115114005 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.571 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.571 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T10:53:03.571653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.571 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.572 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.572 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.572 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.572 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.573 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.573 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.573 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.573 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T10:53:03.573826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.574 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.574 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.574 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.574 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.574 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.575 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.575 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.576 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T10:53:03.575981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.576 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.576 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.576 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.577 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.577 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
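
Note the division of labour visible in the heartbeat lines: worker 14 emits "Pollster heartbeat update: <meter>" as each pollster runs, and worker 12 logs "Updated heartbeat for <meter> (<timestamp>)" shortly after. One plausible reading is a producer/consumer split, sketched below with illustrative names (record and flush are not ceilometer identifiers):

    import datetime
    import queue
    import threading
    import time

    HEARTBEATS: dict[str, datetime.datetime] = {}
    _pending: "queue.Queue[str]" = queue.Queue()

    def record(meter: str) -> None:
        # polling worker: note that this pollster just ran
        _pending.put(meter)

    def flush() -> None:
        # status worker: persist timestamps ("Updated heartbeat for ...")
        while True:
            meter = _pending.get()
            HEARTBEATS[meter] = datetime.datetime.utcnow()
            print(f"Updated heartbeat for {meter} ({HEARTBEATS[meter].isoformat()})")

    threading.Thread(target=flush, daemon=True).start()
    record("disk.device.usage")
    time.sleep(0.2)  # give the status worker time to drain the queue
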
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.577 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T10:53:03.578124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.578 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 1590671507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.578 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 14157667 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.578 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.579 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 6628828994 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.579 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 11732398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.579 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.579 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.580 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T10:53:03.580126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.580 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.580 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
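
Both instances report power.state volume 1. On a libvirt-backed compute node the state codes follow libvirt's virDomainState enumeration, in which 1 means "running"; a small decoder (the helper name is illustrative):

    # virDomainState codes from the libvirt C API
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }

    def decode_power_state(volume: int) -> str:
        return VIR_DOMAIN_STATE.get(volume, "unknown")

    print(decode_power_state(1))  # -> "running", matching both instances above
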
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.581 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.581 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.581 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T10:53:03.581349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.581 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.582 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.582 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 227 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.582 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.582 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
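
Each _stats_to_sample line above has the fixed shape "<instance-uuid>/<meter> volume: <value>", which makes the samples easy to extract when triaging a journal dump. A regex sketch (assumes exactly the format shown here; not an official ceilometer tool):

    import re

    SAMPLE_RE = re.compile(
        r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<value>\d+)"
    )

    def parse_samples(lines):
        """Yield (instance_uuid, meter, value) for every sample line."""
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                yield m["uuid"], m["meter"], int(m["value"])

    line = ("... 83ab44b9-7ddb-4994-9415-20b7dd9c081c/"
            "disk.device.write.requests volume: 232 ...")
    print(list(parse_samples([line])))
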
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.583 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.583 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T10:53:03.583526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.584 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.584 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.584 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.585 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.585 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.585 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.585 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.586 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.586 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.586 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
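
The skip message above shows that discovery results gate pollster execution: network.incoming.bytes.rate found no new resources this cycle (rate meters additionally need a prior sample to diff against), so the pollster body never runs. A simplified sketch of that gate, with stand-in callables (discover and run_pollster are illustrative, not the real methods):

    def run_cycle(pollsters, discover, run_pollster):
        """Run each pollster only if discovery produced resources."""
        resources = discover()  # e.g. locally running libvirt domains
        for name in pollsters:
            if not resources:
                print(f"Skip pollster {name}, no new resources found this cycle")
                continue
            print(f"Polling pollster {name} in the context of pollsters")
            run_pollster(name, resources)
            print(f"Finished polling pollster {name} in the context of pollsters")

    run_cycle(["network.incoming.bytes.rate"],
              discover=lambda: [],               # nothing discovered this cycle
              run_pollster=lambda name, res: None)
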
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T10:53:03.584848) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T10:53:03.587646) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.589 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T10:53:03.588863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.589 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.590 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.591 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.591 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.591 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.593 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.593 14 DEBUG ceilometer.compute.pollsters [-] 83ab44b9-7ddb-4994-9415-20b7dd9c081c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T10:53:03.590416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T10:53:03.591463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.593 14 DEBUG ceilometer.compute.pollsters [-] 31174924-a3e8-4662-baad-ac9aa49c01ab/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T10:53:03.593330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:53:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:53:03.597 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
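
The burst of "Finished processing pollster [...]" lines closes the 10:53:03 polling task. A quick sanity check when reading such a slice is that every "Polling pollster X" has a matching "Finished polling pollster X"; a log-triage sketch (not part of ceilometer):

    import re

    def unfinished_pollsters(journal_lines):
        started, finished = set(), set()
        for line in journal_lines:
            if m := re.search(r"Polling pollster ([\w.]+)", line):
                started.add(m.group(1))
            if m := re.search(r"Finished polling pollster ([\w.]+)", line):
                finished.add(m.group(1))
        return started - finished  # empty set => a clean cycle

    print(unfinished_pollsters([
        "... Polling pollster disk.root.size in the context of pollsters",
        "... Finished polling pollster disk.root.size in the context of pollsters",
    ]))  # -> set()
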
Nov 25 10:53:05 compute-0 nova_compute[189381]: 2025-11-25 10:53:05.199 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:05 compute-0 podman[249228]: 2025-11-25 10:53:05.983148928 +0000 UTC m=+0.089945134 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 10:53:05 compute-0 podman[249227]: 2025-11-25 10:53:05.998938572 +0000 UTC m=+0.106059368 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
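
Both podman health_status events report healthy with a zero failing streak, and each embeds the container's full definition in config_data. Reducing that blob to the few fields that matter speeds up triage; a sketch over the structure visible above (dict literal trimmed from the ovn_controller event):

    config_data = {  # trimmed copy of the ovn_controller config_data above
        "image": "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
        "net": "host",
        "privileged": True,
        "restart": "always",
        "healthcheck": {
            "mount": "/var/lib/openstack/healthchecks/ovn_controller",
            "test": "/openstack/healthcheck",
        },
        "volumes": ["/lib/modules:/lib/modules:ro", "/run:/run"],
    }

    def summarize(cfg: dict) -> str:
        hc = cfg.get("healthcheck", {})
        return (f"image={cfg['image']} net={cfg.get('net')} "
                f"privileged={cfg.get('privileged')} "
                f"healthcheck={hc.get('test')} volumes={len(cfg.get('volumes', []))}")

    print(summarize(config_data))
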
Nov 25 10:53:08 compute-0 nova_compute[189381]: 2025-11-25 10:53:08.156 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.053 189385 DEBUG nova.compute.manager [req-8b9ba965-c84b-446d-8a95-9ad26e9935b7 req-562f99f0-485d-48c6-a482-d695f72a8cd1 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received event network-changed-51ae07e4-a2d5-4ea0-8a58-37fa22980090 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.053 189385 DEBUG nova.compute.manager [req-8b9ba965-c84b-446d-8a95-9ad26e9935b7 req-562f99f0-485d-48c6-a482-d695f72a8cd1 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Refreshing instance network info cache due to event network-changed-51ae07e4-a2d5-4ea0-8a58-37fa22980090. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.053 189385 DEBUG oslo_concurrency.lockutils [req-8b9ba965-c84b-446d-8a95-9ad26e9935b7 req-562f99f0-485d-48c6-a482-d695f72a8cd1 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.054 189385 DEBUG oslo_concurrency.lockutils [req-8b9ba965-c84b-446d-8a95-9ad26e9935b7 req-562f99f0-485d-48c6-a482-d695f72a8cd1 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.054 189385 DEBUG nova.network.neutron [req-8b9ba965-c84b-446d-8a95-9ad26e9935b7 req-562f99f0-485d-48c6-a482-d695f72a8cd1 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Refreshing network info cache for port 51ae07e4-a2d5-4ea0-8a58-37fa22980090 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.338 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.339 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.339 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.340 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.341 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
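
The oslo_concurrency lines above follow its usual pattern: a lock named after the instance UUID serializes terminate_instance, and wait/hold durations are logged on acquire and release ("waited 0.001s", "held 0.001s"). A plain-threading sketch of the same pattern (an illustration, not oslo's implementation):

    import threading
    import time
    from contextlib import contextmanager

    _locks: dict[str, threading.Lock] = {}

    @contextmanager
    def named_lock(name: str):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        lock.acquire()
        print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

    with named_lock("83ab44b9-7ddb-4994-9415-20b7dd9c081c"):
        pass  # do_terminate_instance() would run here
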
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.342 189385 INFO nova.compute.manager [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Terminating instance
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.344 189385 DEBUG nova.compute.manager [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 10:53:09 compute-0 kernel: tap51ae07e4-a2 (unregistering): left promiscuous mode
Nov 25 10:53:09 compute-0 NetworkManager[56317]: <info>  [1764067989.3797] device (tap51ae07e4-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 10:53:09 compute-0 ovn_controller[97779]: 2025-11-25T10:53:09Z|00058|binding|INFO|Releasing lport 51ae07e4-a2d5-4ea0-8a58-37fa22980090 from this chassis (sb_readonly=0)
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.391 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 ovn_controller[97779]: 2025-11-25T10:53:09Z|00059|binding|INFO|Setting lport 51ae07e4-a2d5-4ea0-8a58-37fa22980090 down in Southbound
Nov 25 10:53:09 compute-0 ovn_controller[97779]: 2025-11-25T10:53:09Z|00060|binding|INFO|Removing iface tap51ae07e4-a2 ovn-installed in OVS
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.394 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.404 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.412 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:c3:2b 192.168.0.243'], port_security=['fa:16:3e:0e:c3:2b 192.168.0.243'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-6oeui4yfk7wn-wt3ljj7puxet-54ctihgnfppt-port-xs3cpczjijad', 'neutron:cidrs': '192.168.0.243/24', 'neutron:device_id': '83ab44b9-7ddb-4994-9415-20b7dd9c081c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35870011-2c24-4719-a9ee-4942cd8ed50e', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-6oeui4yfk7wn-wt3ljj7puxet-54ctihgnfppt-port-xs3cpczjijad', 'neutron:project_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48d58879-e124-47b1-85de-2b7aab5c0e02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53f1de54-d9db-4691-881b-b04f921a948f, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=51ae07e4-a2d5-4ea0-8a58-37fa22980090) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.414 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 51ae07e4-a2d5-4ea0-8a58-37fa22980090 in datapath 35870011-2c24-4719-a9ee-4942cd8ed50e unbound from our chassis
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.415 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35870011-2c24-4719-a9ee-4942cd8ed50e
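
The metadata agent's conclusion ("unbound from our chassis") follows from the Port_Binding delta above: the old row had up=[True] and a chassis reference, the new row has up=[False] and chassis=[]. A sketch of that comparison (row dicts simplified; not the agent's actual event class):

    def port_unbound_from_chassis(new_row: dict, old_row: dict, chassis: str) -> bool:
        """True when an update removed this chassis from a port we hosted."""
        was_ours = old_row.get("chassis") == [chassis]
        return was_ours and new_row.get("chassis") != [chassis]

    old = {"chassis": ["compute-0"], "up": [True]}
    new = {"chassis": [], "up": [False]}
    print(port_unbound_from_chassis(new, old, "compute-0"))  # True
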
Nov 25 10:53:09 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 25 10:53:09 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 45.470s CPU time.
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.431 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[f4861fb4-927c-4017-a97c-ed52280af144]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:09 compute-0 systemd-machined[155706]: Machine qemu-4-instance-00000004 terminated.
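
From "Terminating instance" at 10:53:09.342 to "Machine qemu-4-instance-00000004 terminated." the teardown events land in a fixed order across nova, the kernel, ovn-controller and systemd. A grep-style sketch that asserts this ordering in a journal slice (each marker is a substring of a line above):

    TEARDOWN_ORDER = [
        "Terminating instance",
        "Start destroying the instance on the hypervisor",
        "left promiscuous mode",
        "Releasing lport",
        "Setting lport",
        "Deactivated successfully",
        "terminated.",
    ]

    def in_order(journal_lines, markers=TEARDOWN_ORDER):
        """True if each marker appears on some line after the previous one."""
        pos = 0
        for marker in markers:
            while pos < len(journal_lines) and marker not in journal_lines[pos]:
                pos += 1
            if pos == len(journal_lines):
                return False
            pos += 1
        return True
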
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.460 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7a866a-506c-4106-9238-6af8d2024dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.463 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8a9d4a-e166-4e3e-87fa-c822c1ee87a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.490 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[b0690e83-b680-4795-b932-bc62ab11312d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.507 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[3e837cc9-f24d-42f5-8e3f-6cbc8fed0fc7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35870011-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:64:2e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369752, 'reachable_time': 36927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249287, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.522 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d25195-dcf0-4385-a8ad-56ae893ea6a5]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369763, 'tstamp': 369763}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249288, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap35870011-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 369766, 'tstamp': 369766}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249288, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.524 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35870011-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.526 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.531 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.533 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35870011-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.533 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.534 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35870011-20, col_values=(('external_ids', {'iface-id': '20fbfb61-2dd4-482a-ae9e-a3e6b61ab9ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:53:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:09.534 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.631 189385 INFO nova.virt.libvirt.driver [-] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Instance destroyed successfully.
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.631 189385 DEBUG nova.objects.instance [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'resources' on Instance uuid 83ab44b9-7ddb-4994-9415-20b7dd9c081c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.646 189385 DEBUG nova.virt.libvirt.vif [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T10:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4yfk7wn-wt3ljj7puxet-54ctihgnfppt-vnf-zyrkdio57cum',id=4,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T10:42:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='d1a74954-729e-4b7f-a26d-ccdc925aa15b'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-ljsskeb5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T10:42:32Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09Mjk0MjIzNjQ4OTcxODQ5Mjk5NT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Nov 25 10:53:09 compute-0 nova_compute[189381]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09Mjk0MjIzNjQ4OTcxODQ5Mjk5NT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTI5NDIyMzY0ODk3MTg0OTI5OTU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0yOTQyMjM2NDg5NzE4NDkyOTk1PT0tLQo=',user_id='af7a147d86064a21a94066f72173bba2',uuid=83ab44b9-7ddb-4994-9415-20b7dd9c081c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.646 189385 DEBUG nova.network.os_vif_util [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.647 189385 DEBUG nova.network.os_vif_util [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:c3:2b,bridge_name='br-int',has_traffic_filtering=True,id=51ae07e4-a2d5-4ea0-8a58-37fa22980090,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap51ae07e4-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.647 189385 DEBUG os_vif [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:c3:2b,bridge_name='br-int',has_traffic_filtering=True,id=51ae07e4-a2d5-4ea0-8a58-37fa22980090,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap51ae07e4-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.649 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.649 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51ae07e4-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.651 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.653 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.656 189385 INFO os_vif [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:c3:2b,bridge_name='br-int',has_traffic_filtering=True,id=51ae07e4-a2d5-4ea0-8a58-37fa22980090,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap51ae07e4-a2')
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.657 189385 INFO nova.virt.libvirt.driver [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Deleting instance files /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c_del
Nov 25 10:53:09 compute-0 nova_compute[189381]: 2025-11-25 10:53:09.657 189385 INFO nova.virt.libvirt.driver [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Deletion of /var/lib/nova/instances/83ab44b9-7ddb-4994-9415-20b7dd9c081c_del complete
Nov 25 10:53:10 compute-0 nova_compute[189381]: 2025-11-25 10:53:10.088 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:10.088 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:53:10 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:10.090 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:53:10 compute-0 rsyslogd[236628]: message too long (8192) with configured size 8096, begin of message is: 2025-11-25 10:53:09.646 189385 DEBUG nova.virt.libvirt.vif [None req-b0c7a2ff-ff [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 25 10:53:10 compute-0 nova_compute[189381]: 2025-11-25 10:53:10.167 189385 INFO nova.compute.manager [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 25 10:53:10 compute-0 nova_compute[189381]: 2025-11-25 10:53:10.168 189385 DEBUG oslo.service.loopingcall [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 10:53:10 compute-0 nova_compute[189381]: 2025-11-25 10:53:10.168 189385 DEBUG nova.compute.manager [-] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 10:53:10 compute-0 nova_compute[189381]: 2025-11-25 10:53:10.168 189385 DEBUG nova.network.neutron [-] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 10:53:10 compute-0 podman[249311]: 2025-11-25 10:53:10.959628825 +0000 UTC m=+0.064898365 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:53:11 compute-0 nova_compute[189381]: 2025-11-25 10:53:11.106 189385 DEBUG nova.compute.manager [req-95e96e53-b85d-42ba-a547-9fbc337bdc6c req-f07c8373-d717-440a-90e2-c478ebbbaa5e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received event network-vif-unplugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:53:11 compute-0 nova_compute[189381]: 2025-11-25 10:53:11.106 189385 DEBUG oslo_concurrency.lockutils [req-95e96e53-b85d-42ba-a547-9fbc337bdc6c req-f07c8373-d717-440a-90e2-c478ebbbaa5e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:11 compute-0 nova_compute[189381]: 2025-11-25 10:53:11.106 189385 DEBUG oslo_concurrency.lockutils [req-95e96e53-b85d-42ba-a547-9fbc337bdc6c req-f07c8373-d717-440a-90e2-c478ebbbaa5e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:11 compute-0 nova_compute[189381]: 2025-11-25 10:53:11.106 189385 DEBUG oslo_concurrency.lockutils [req-95e96e53-b85d-42ba-a547-9fbc337bdc6c req-f07c8373-d717-440a-90e2-c478ebbbaa5e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:11 compute-0 nova_compute[189381]: 2025-11-25 10:53:11.107 189385 DEBUG nova.compute.manager [req-95e96e53-b85d-42ba-a547-9fbc337bdc6c req-f07c8373-d717-440a-90e2-c478ebbbaa5e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] No waiting events found dispatching network-vif-unplugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:53:11 compute-0 nova_compute[189381]: 2025-11-25 10:53:11.107 189385 DEBUG nova.compute.manager [req-95e96e53-b85d-42ba-a547-9fbc337bdc6c req-f07c8373-d717-440a-90e2-c478ebbbaa5e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received event network-vif-unplugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 10:53:12 compute-0 nova_compute[189381]: 2025-11-25 10:53:12.856 189385 DEBUG nova.network.neutron [req-8b9ba965-c84b-446d-8a95-9ad26e9935b7 req-562f99f0-485d-48c6-a482-d695f72a8cd1 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updated VIF entry in instance network info cache for port 51ae07e4-a2d5-4ea0-8a58-37fa22980090. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 10:53:12 compute-0 nova_compute[189381]: 2025-11-25 10:53:12.858 189385 DEBUG nova.network.neutron [req-8b9ba965-c84b-446d-8a95-9ad26e9935b7 req-562f99f0-485d-48c6-a482-d695f72a8cd1 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updating instance_info_cache with network_info: [{"id": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "address": "fa:16:3e:0e:c3:2b", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.243", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51ae07e4-a2", "ovs_interfaceid": "51ae07e4-a2d5-4ea0-8a58-37fa22980090", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.057 189385 DEBUG oslo_concurrency.lockutils [req-8b9ba965-c84b-446d-8a95-9ad26e9935b7 req-562f99f0-485d-48c6-a482-d695f72a8cd1 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-83ab44b9-7ddb-4994-9415-20b7dd9c081c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 10:53:13 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:13.093 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.158 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.297 189385 DEBUG nova.compute.manager [req-c83df786-fc69-4ec3-8b11-ce9c59282569 req-e1a6e621-02c9-4a3f-b16f-2dbce9387f7f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received event network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.297 189385 DEBUG oslo_concurrency.lockutils [req-c83df786-fc69-4ec3-8b11-ce9c59282569 req-e1a6e621-02c9-4a3f-b16f-2dbce9387f7f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.298 189385 DEBUG oslo_concurrency.lockutils [req-c83df786-fc69-4ec3-8b11-ce9c59282569 req-e1a6e621-02c9-4a3f-b16f-2dbce9387f7f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.298 189385 DEBUG oslo_concurrency.lockutils [req-c83df786-fc69-4ec3-8b11-ce9c59282569 req-e1a6e621-02c9-4a3f-b16f-2dbce9387f7f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.298 189385 DEBUG nova.compute.manager [req-c83df786-fc69-4ec3-8b11-ce9c59282569 req-e1a6e621-02c9-4a3f-b16f-2dbce9387f7f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] No waiting events found dispatching network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.299 189385 WARNING nova.compute.manager [req-c83df786-fc69-4ec3-8b11-ce9c59282569 req-e1a6e621-02c9-4a3f-b16f-2dbce9387f7f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Received unexpected event network-vif-plugged-51ae07e4-a2d5-4ea0-8a58-37fa22980090 for instance with vm_state active and task_state deleting.
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.344 189385 DEBUG nova.network.neutron [-] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.469 189385 INFO nova.compute.manager [-] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Took 3.30 seconds to deallocate network for instance.
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.534 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.534 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.689 189385 DEBUG nova.compute.provider_tree [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.704 189385 DEBUG nova.scheduler.client.report [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:53:13 compute-0 nova_compute[189381]: 2025-11-25 10:53:13.773 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:14 compute-0 nova_compute[189381]: 2025-11-25 10:53:14.651 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:15 compute-0 nova_compute[189381]: 2025-11-25 10:53:15.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:15 compute-0 nova_compute[189381]: 2025-11-25 10:53:15.138 189385 INFO nova.scheduler.client.report [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Deleted allocations for instance 83ab44b9-7ddb-4994-9415-20b7dd9c081c
Nov 25 10:53:15 compute-0 nova_compute[189381]: 2025-11-25 10:53:15.696 189385 DEBUG oslo_concurrency.lockutils [None req-b0c7a2ff-ff3d-40dd-b854-b2dd05f9557a af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "83ab44b9-7ddb-4994-9415-20b7dd9c081c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.047 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.047 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.048 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.128 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.200 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.201 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.262 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.263 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.322 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.323 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.388 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.711 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.712 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5118MB free_disk=72.17944717407227GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.713 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.713 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.778 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 31174924-a3e8-4662-baad-ac9aa49c01ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.778 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.779 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.830 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.853 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.854 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.883 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.915 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.969 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:53:17 compute-0 nova_compute[189381]: 2025-11-25 10:53:17.981 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:53:18 compute-0 nova_compute[189381]: 2025-11-25 10:53:18.025 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:53:18 compute-0 nova_compute[189381]: 2025-11-25 10:53:18.025 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:18 compute-0 nova_compute[189381]: 2025-11-25 10:53:18.159 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:19 compute-0 nova_compute[189381]: 2025-11-25 10:53:19.655 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:19 compute-0 podman[249353]: 2025-11-25 10:53:19.972937806 +0000 UTC m=+0.076644702 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:53:19 compute-0 podman[249352]: 2025-11-25 10:53:19.978775704 +0000 UTC m=+0.082615664 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 25 10:53:21 compute-0 nova_compute[189381]: 2025-11-25 10:53:21.026 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:22 compute-0 podman[249390]: 2025-11-25 10:53:22.998433333 +0000 UTC m=+0.104331728 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, managed_by=edpm_ansible)
Nov 25 10:53:23 compute-0 nova_compute[189381]: 2025-11-25 10:53:23.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:23 compute-0 nova_compute[189381]: 2025-11-25 10:53:23.162 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:24 compute-0 nova_compute[189381]: 2025-11-25 10:53:24.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:24 compute-0 nova_compute[189381]: 2025-11-25 10:53:24.629 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764067989.6279037, 83ab44b9-7ddb-4994-9415-20b7dd9c081c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:53:24 compute-0 nova_compute[189381]: 2025-11-25 10:53:24.630 189385 INFO nova.compute.manager [-] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] VM Stopped (Lifecycle Event)
Nov 25 10:53:24 compute-0 nova_compute[189381]: 2025-11-25 10:53:24.655 189385 DEBUG nova.compute.manager [None req-3e57e454-c40b-4e6d-8376-ea5b862ad4fc - - - - - -] [instance: 83ab44b9-7ddb-4994-9415-20b7dd9c081c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:53:24 compute-0 nova_compute[189381]: 2025-11-25 10:53:24.658 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:25 compute-0 nova_compute[189381]: 2025-11-25 10:53:25.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:25 compute-0 nova_compute[189381]: 2025-11-25 10:53:25.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:53:25 compute-0 nova_compute[189381]: 2025-11-25 10:53:25.043 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:53:26 compute-0 nova_compute[189381]: 2025-11-25 10:53:26.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:28 compute-0 nova_compute[189381]: 2025-11-25 10:53:28.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:28 compute-0 nova_compute[189381]: 2025-11-25 10:53:28.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:53:28 compute-0 nova_compute[189381]: 2025-11-25 10:53:28.164 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:28 compute-0 podman[249409]: 2025-11-25 10:53:28.950122532 +0000 UTC m=+0.064679189 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 10:53:29 compute-0 nova_compute[189381]: 2025-11-25 10:53:29.662 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:29 compute-0 podman[203557]: time="2025-11-25T10:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:53:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 10:53:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 25 10:53:31 compute-0 openstack_network_exporter[205722]: ERROR   10:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:53:31 compute-0 openstack_network_exporter[205722]: ERROR   10:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:53:31 compute-0 openstack_network_exporter[205722]: ERROR   10:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:53:31 compute-0 openstack_network_exporter[205722]: ERROR   10:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:53:31 compute-0 openstack_network_exporter[205722]: ERROR   10:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.788 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.789 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.789 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.789 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.790 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.791 189385 INFO nova.compute.manager [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Terminating instance
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.792 189385 DEBUG nova.compute.manager [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 10:53:32 compute-0 kernel: tapb6cf5c87-86 (unregistering): left promiscuous mode
Nov 25 10:53:32 compute-0 NetworkManager[56317]: <info>  [1764068012.8353] device (tapb6cf5c87-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 10:53:32 compute-0 ovn_controller[97779]: 2025-11-25T10:53:32Z|00061|binding|INFO|Releasing lport b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 from this chassis (sb_readonly=0)
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.844 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:32 compute-0 ovn_controller[97779]: 2025-11-25T10:53:32Z|00062|binding|INFO|Setting lport b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 down in Southbound
Nov 25 10:53:32 compute-0 ovn_controller[97779]: 2025-11-25T10:53:32Z|00063|binding|INFO|Removing iface tapb6cf5c87-86 ovn-installed in OVS
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.848 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:32 compute-0 nova_compute[189381]: 2025-11-25 10:53:32.881 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:32 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 25 10:53:32 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 2min 58.528s CPU time.
Nov 25 10:53:32 compute-0 systemd-machined[155706]: Machine qemu-1-instance-00000001 terminated.
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.008 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:39:09 192.168.0.95'], port_security=['fa:16:3e:f3:39:09 192.168.0.95'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.95/24', 'neutron:device_id': '31174924-a3e8-4662-baad-ac9aa49c01ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35870011-2c24-4719-a9ee-4942cd8ed50e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aef0c6ba1dd54218a527ced3f8d2a1be', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48d58879-e124-47b1-85de-2b7aab5c0e02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53f1de54-d9db-4691-881b-b04f921a948f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.009 106634 INFO neutron.agent.ovn.metadata.agent [-] Port b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 in datapath 35870011-2c24-4719-a9ee-4942cd8ed50e unbound from our chassis
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.010 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35870011-2c24-4719-a9ee-4942cd8ed50e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.013 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[74cc5199-d71d-4a7f-9eff-99872cebd9e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.013 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e namespace which is not needed anymore
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.096 189385 INFO nova.virt.libvirt.driver [-] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Instance destroyed successfully.
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.097 189385 DEBUG nova.objects.instance [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lazy-loading 'resources' on Instance uuid 31174924-a3e8-4662-baad-ac9aa49c01ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.116 189385 DEBUG nova.virt.libvirt.vif [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T10:32:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T10:33:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aef0c6ba1dd54218a527ced3f8d2a1be',ramdisk_id='',reservation_id='r-axvtrqdo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='d3f57a9d-2502-43be-9afd-d2b6e1c15c08',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T10:33:09Z,user_data=None,user_id='af7a147d86064a21a94066f72173bba2',uuid=31174924-a3e8-4662-baad-ac9aa49c01ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.116 189385 DEBUG nova.network.os_vif_util [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converting VIF {"id": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "address": "fa:16:3e:f3:39:09", "network": {"id": "35870011-2c24-4719-a9ee-4942cd8ed50e", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.95", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aef0c6ba1dd54218a527ced3f8d2a1be", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6cf5c87-86", "ovs_interfaceid": "b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.117 189385 DEBUG nova.network.os_vif_util [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f3:39:09,bridge_name='br-int',has_traffic_filtering=True,id=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cf5c87-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.117 189385 DEBUG os_vif [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:39:09,bridge_name='br-int',has_traffic_filtering=True,id=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cf5c87-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.118 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.118 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6cf5c87-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.120 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.122 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.125 189385 INFO os_vif [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f3:39:09,bridge_name='br-int',has_traffic_filtering=True,id=b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0,network=Network(35870011-2c24-4719-a9ee-4942cd8ed50e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6cf5c87-86')
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.125 189385 INFO nova.virt.libvirt.driver [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Deleting instance files /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab_del
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.126 189385 INFO nova.virt.libvirt.driver [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Deletion of /var/lib/nova/instances/31174924-a3e8-4662-baad-ac9aa49c01ab_del complete
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.166 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:33 compute-0 podman[249455]: 2025-11-25 10:53:33.186219952 +0000 UTC m=+0.093958610 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.189 189385 INFO nova.compute.manager [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Took 0.40 seconds to destroy the instance on the hypervisor.
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.190 189385 DEBUG oslo.service.loopingcall [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.190 189385 DEBUG nova.compute.manager [-] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.190 189385 DEBUG nova.network.neutron [-] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 10:53:33 compute-0 podman[249459]: 2025-11-25 10:53:33.210737856 +0000 UTC m=+0.120994277 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:53:33 compute-0 neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e[239716]: [NOTICE]   (239722) : haproxy version is 2.8.14-c23fe91
Nov 25 10:53:33 compute-0 neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e[239716]: [NOTICE]   (239722) : path to executable is /usr/sbin/haproxy
Nov 25 10:53:33 compute-0 neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e[239716]: [WARNING]  (239722) : Exiting Master process...
Nov 25 10:53:33 compute-0 neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e[239716]: [ALERT]    (239722) : Current worker (239724) exited with code 143 (Terminated)
Nov 25 10:53:33 compute-0 neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e[239716]: [WARNING]  (239722) : All workers exited. Exiting... (0)
Nov 25 10:53:33 compute-0 systemd[1]: libpod-b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2.scope: Deactivated successfully.
Nov 25 10:53:33 compute-0 podman[249503]: 2025-11-25 10:53:33.22723711 +0000 UTC m=+0.064472713 container died b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2-userdata-shm.mount: Deactivated successfully.
Nov 25 10:53:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7129039e231734bb2f6a7eb42e41cad78263e5272e33d43eb5afef027963ecd1-merged.mount: Deactivated successfully.
Nov 25 10:53:33 compute-0 podman[249503]: 2025-11-25 10:53:33.271990895 +0000 UTC m=+0.109226498 container cleanup b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:53:33 compute-0 systemd[1]: libpod-conmon-b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2.scope: Deactivated successfully.
Nov 25 10:53:33 compute-0 podman[249543]: 2025-11-25 10:53:33.338245699 +0000 UTC m=+0.042451821 container remove b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.346 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[21793b0a-1a29-4166-b5ae-ab50eef6ed99]: (4, ('Tue Nov 25 10:53:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e (b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2)\nb2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2\nTue Nov 25 10:53:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e (b2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2)\nb2d5dbc9115464f327942e99b806313977b2fa6cef687a58ce5dd8e4a15d17b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.348 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[5330ea82-e4be-4d06-9398-8a48da3c1bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.349 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35870011-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.351 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:33 compute-0 kernel: tap35870011-20: left promiscuous mode
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.354 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.359 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[243a3ef5-6507-4bec-bf65-fed95b5c7087]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:33 compute-0 nova_compute[189381]: 2025-11-25 10:53:33.371 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.379 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[98f3f0c8-77ed-45c8-8ab8-6f3e53a86bcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.380 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[092a8452-2ead-4a55-bced-629b567d7346]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.396 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[79151ab9-9a5d-4d3c-ba2b-6fac01b99e82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 369741, 'reachable_time': 40549, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249557, 'error': None, 'target': 'ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d35870011\x2d2c24\x2d4719\x2da9ee\x2d4942cd8ed50e.mount: Deactivated successfully.
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.409 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35870011-2c24-4719-a9ee-4942cd8ed50e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 10:53:33 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:33.411 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b8816f-1f88-414b-b117-78c0614b9810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 10:53:34 compute-0 nova_compute[189381]: 2025-11-25 10:53:34.020 189385 DEBUG nova.compute.manager [req-8b7c113c-cfb8-4405-943f-dea7cb59f246 req-55ada525-07b8-412f-9a17-500ab29bb465 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received event network-vif-unplugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:53:34 compute-0 nova_compute[189381]: 2025-11-25 10:53:34.021 189385 DEBUG oslo_concurrency.lockutils [req-8b7c113c-cfb8-4405-943f-dea7cb59f246 req-55ada525-07b8-412f-9a17-500ab29bb465 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:34 compute-0 nova_compute[189381]: 2025-11-25 10:53:34.021 189385 DEBUG oslo_concurrency.lockutils [req-8b7c113c-cfb8-4405-943f-dea7cb59f246 req-55ada525-07b8-412f-9a17-500ab29bb465 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:34 compute-0 nova_compute[189381]: 2025-11-25 10:53:34.022 189385 DEBUG oslo_concurrency.lockutils [req-8b7c113c-cfb8-4405-943f-dea7cb59f246 req-55ada525-07b8-412f-9a17-500ab29bb465 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:34 compute-0 nova_compute[189381]: 2025-11-25 10:53:34.022 189385 DEBUG nova.compute.manager [req-8b7c113c-cfb8-4405-943f-dea7cb59f246 req-55ada525-07b8-412f-9a17-500ab29bb465 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] No waiting events found dispatching network-vif-unplugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:53:34 compute-0 nova_compute[189381]: 2025-11-25 10:53:34.022 189385 DEBUG nova.compute.manager [req-8b7c113c-cfb8-4405-943f-dea7cb59f246 req-55ada525-07b8-412f-9a17-500ab29bb465 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received event network-vif-unplugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 10:53:34 compute-0 nova_compute[189381]: 2025-11-25 10:53:34.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:53:35 compute-0 nova_compute[189381]: 2025-11-25 10:53:35.985 189385 DEBUG nova.network.neutron [-] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.002 189385 INFO nova.compute.manager [-] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Took 2.81 seconds to deallocate network for instance.
Nov 25 10:53:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:36.056 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:36.056 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:53:36.056 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.066 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.067 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.124 189385 DEBUG nova.compute.manager [req-4258a5d1-a454-4be0-9558-2ef5b4eaeae4 req-d23d5295-bf35-49ba-b333-e69b60c08ac6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received event network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.125 189385 DEBUG oslo_concurrency.lockutils [req-4258a5d1-a454-4be0-9558-2ef5b4eaeae4 req-d23d5295-bf35-49ba-b333-e69b60c08ac6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.126 189385 DEBUG oslo_concurrency.lockutils [req-4258a5d1-a454-4be0-9558-2ef5b4eaeae4 req-d23d5295-bf35-49ba-b333-e69b60c08ac6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.126 189385 DEBUG oslo_concurrency.lockutils [req-4258a5d1-a454-4be0-9558-2ef5b4eaeae4 req-d23d5295-bf35-49ba-b333-e69b60c08ac6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.126 189385 DEBUG nova.compute.manager [req-4258a5d1-a454-4be0-9558-2ef5b4eaeae4 req-d23d5295-bf35-49ba-b333-e69b60c08ac6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] No waiting events found dispatching network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.126 189385 WARNING nova.compute.manager [req-4258a5d1-a454-4be0-9558-2ef5b4eaeae4 req-d23d5295-bf35-49ba-b333-e69b60c08ac6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received unexpected event network-vif-plugged-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 for instance with vm_state deleted and task_state None.
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.127 189385 DEBUG nova.compute.manager [req-4258a5d1-a454-4be0-9558-2ef5b4eaeae4 req-d23d5295-bf35-49ba-b333-e69b60c08ac6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Received event network-vif-deleted-b6cf5c87-86ed-403f-91ab-cc0e9fe29ec0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.151 189385 DEBUG nova.compute.provider_tree [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.165 189385 DEBUG nova.scheduler.client.report [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.208 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.239 189385 INFO nova.scheduler.client.report [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Deleted allocations for instance 31174924-a3e8-4662-baad-ac9aa49c01ab
Nov 25 10:53:36 compute-0 nova_compute[189381]: 2025-11-25 10:53:36.404 189385 DEBUG oslo_concurrency.lockutils [None req-85f44ace-d8bf-4e24-9b32-c5207ee9c0bb af7a147d86064a21a94066f72173bba2 aef0c6ba1dd54218a527ced3f8d2a1be - - default default] Lock "31174924-a3e8-4662-baad-ac9aa49c01ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:53:37 compute-0 podman[249559]: 2025-11-25 10:53:37.021834267 +0000 UTC m=+0.123780606 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:53:37 compute-0 podman[249560]: 2025-11-25 10:53:37.036512679 +0000 UTC m=+0.124406024 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 10:53:38 compute-0 nova_compute[189381]: 2025-11-25 10:53:38.122 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:38 compute-0 nova_compute[189381]: 2025-11-25 10:53:38.168 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:42 compute-0 podman[249602]: 2025-11-25 10:53:42.013520701 +0000 UTC m=+0.116148458 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:53:43 compute-0 nova_compute[189381]: 2025-11-25 10:53:43.126 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:43 compute-0 nova_compute[189381]: 2025-11-25 10:53:43.171 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:48 compute-0 nova_compute[189381]: 2025-11-25 10:53:48.090 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068013.0894644, 31174924-a3e8-4662-baad-ac9aa49c01ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 10:53:48 compute-0 nova_compute[189381]: 2025-11-25 10:53:48.090 189385 INFO nova.compute.manager [-] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] VM Stopped (Lifecycle Event)
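
The LifecycleEvent timestamp decodes to 10:53:33 UTC, roughly fifteen seconds before this log line, consistent with the libvirt driver delaying Stopped lifecycle events before emitting them. Quick decode:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1764068013.0894644, tz=timezone.utc))
    # 2025-11-25 10:53:33.089464+00:00
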
Nov 25 10:53:48 compute-0 nova_compute[189381]: 2025-11-25 10:53:48.113 189385 DEBUG nova.compute.manager [None req-fefb5d5f-8f42-4615-92c0-486acd5bd3e3 - - - - - -] [instance: 31174924-a3e8-4662-baad-ac9aa49c01ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 10:53:48 compute-0 nova_compute[189381]: 2025-11-25 10:53:48.131 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:48 compute-0 nova_compute[189381]: 2025-11-25 10:53:48.174 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:50 compute-0 podman[249626]: 2025-11-25 10:53:50.979950358 +0000 UTC m=+0.098458860 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 25 10:53:50 compute-0 podman[249627]: 2025-11-25 10:53:50.979964688 +0000 UTC m=+0.096543234 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:53:53 compute-0 nova_compute[189381]: 2025-11-25 10:53:53.136 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:53 compute-0 nova_compute[189381]: 2025-11-25 10:53:53.177 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:53 compute-0 podman[249668]: 2025-11-25 10:53:53.973946449 +0000 UTC m=+0.091550681 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container)
Nov 25 10:53:58 compute-0 nova_compute[189381]: 2025-11-25 10:53:58.141 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:58 compute-0 nova_compute[189381]: 2025-11-25 10:53:58.179 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:53:59 compute-0 podman[203557]: time="2025-11-25T10:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:53:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:53:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4331 "" "Go-http-client/1.1"
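
These two GETs are the podman_exporter polling the libpod REST API over the socket it mounts (CONTAINER_HOST=unix:///run/podman/podman.sock, per the config_data above). A stdlib-only sketch of the same list query; it needs read access to the socket, so root in practice:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Minimal HTTP-over-unix-socket client (stdlib only)."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = json.loads(conn.getresponse().read())
    print([c.get("Names") for c in body])  # container name lists, roughly the 28290-byte reply above
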
Nov 25 10:53:59 compute-0 podman[249688]: 2025-11-25 10:53:59.986436325 +0000 UTC m=+0.091163490 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:01 compute-0 openstack_network_exporter[205722]: ERROR   10:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:54:01 compute-0 openstack_network_exporter[205722]: ERROR   10:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:54:01 compute-0 openstack_network_exporter[205722]: ERROR   10:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:54:01 compute-0 openstack_network_exporter[205722]: ERROR   10:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:54:01 compute-0 openstack_network_exporter[205722]: ERROR   10:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
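
The exporter's appctl errors all come down to control-socket discovery: an ovs-appctl-style client reads <rundir>/<target>.pid and then connects to <rundir>/<target>.<pid>.ctl, and on a compute node there is no ovn-northd pidfile to find, so this is expected noise rather than a fault. A rough sketch of that lookup, assuming the conventional rundir layout:

    import glob
    import os

    def find_ctl_socket(target, rundir="/var/run/ovn"):
        # appctl-style resolution: <rundir>/<target>.pid gives the PID,
        # the control socket is then <rundir>/<target>.<pid>.ctl.
        pidfile = os.path.join(rundir, target + ".pid")
        try:
            pid = int(open(pidfile).read().strip())
        except FileNotFoundError:
            return None  # -> "no control socket files found for ovn-northd"
        hits = glob.glob(os.path.join(rundir, "%s.%d.ctl" % (target, pid)))
        return hits[0] if hits else None

    print(find_ctl_socket("ovn-northd"))  # None here: northd runs on the controllers
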
Nov 25 10:54:03 compute-0 nova_compute[189381]: 2025-11-25 10:54:03.144 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:03 compute-0 nova_compute[189381]: 2025-11-25 10:54:03.181 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:03 compute-0 podman[249709]: 2025-11-25 10:54:03.978195704 +0000 UTC m=+0.088456282 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
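
node_exporter's systemd collector is scoped by the --collector.systemd.unit-include regex in the command above, so only EDPM-managed, OVS/virt and rsyslog units are scraped. A quick check of what the pattern admits (the unit names below are illustrative, not taken from this host):

    import re

    # Regex copied verbatim from the --collector.systemd.unit-include flag.
    include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ("edpm_nova_compute.service", "ovs-vswitchd.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"):
        print(unit, bool(include.fullmatch(unit)))
    # sshd.service is the only one excluded
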
Nov 25 10:54:03 compute-0 podman[249708]: 2025-11-25 10:54:03.996523511 +0000 UTC m=+0.107275433 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, vcs-type=git, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:54:05 compute-0 ovn_controller[97779]: 2025-11-25T10:54:05Z|00064|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 25 10:54:06 compute-0 nova_compute[189381]: 2025-11-25 10:54:06.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:06 compute-0 nova_compute[189381]: 2025-11-25 10:54:06.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 10:54:08 compute-0 podman[249750]: 2025-11-25 10:54:08.001644298 +0000 UTC m=+0.111844233 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 10:54:08 compute-0 podman[249749]: 2025-11-25 10:54:08.065908004 +0000 UTC m=+0.168141931 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:54:08 compute-0 nova_compute[189381]: 2025-11-25 10:54:08.149 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:08 compute-0 nova_compute[189381]: 2025-11-25 10:54:08.184 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:12 compute-0 podman[249791]: 2025-11-25 10:54:12.985161627 +0000 UTC m=+0.090526401 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 10:54:13 compute-0 nova_compute[189381]: 2025-11-25 10:54:13.153 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:13 compute-0 nova_compute[189381]: 2025-11-25 10:54:13.186 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:17 compute-0 nova_compute[189381]: 2025-11-25 10:54:17.036 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:18 compute-0 nova_compute[189381]: 2025-11-25 10:54:18.159 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:18 compute-0 nova_compute[189381]: 2025-11-25 10:54:18.189 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.093 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.095 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.095 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.096 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.466 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.468 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5366MB free_disk=72.20188522338867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
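
The pci_devices payload in this resource view is plain JSON: eleven functions, all virtio (vendor 1af4) or emulated Intel chipset (8086), each with numa_node null, in line with the multi-socket NUMA warning just above. A short tally over an excerpt of the logged list:

    import json
    from collections import Counter

    # Excerpt of the pci_devices JSON logged above.
    pci_devices = json.loads('[{"vendor_id": "1af4", "product_id": "1000"},'
                             ' {"vendor_id": "1af4", "product_id": "1005"},'
                             ' {"vendor_id": "8086", "product_id": "7000"}]')
    print(Counter(d["vendor_id"] for d in pci_devices))
    # Counter({'1af4': 2, '8086': 1})
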
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.468 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.469 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.660 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.661 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.731 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.750 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.867 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:54:19 compute-0 nova_compute[189381]: 2025-11-25 10:54:19.868 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:54:21 compute-0 podman[249818]: 2025-11-25 10:54:21.96012068 +0000 UTC m=+0.072954276 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 25 10:54:22 compute-0 podman[249819]: 2025-11-25 10:54:22.014657297 +0000 UTC m=+0.120324608 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:54:22 compute-0 nova_compute[189381]: 2025-11-25 10:54:22.868 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:23 compute-0 nova_compute[189381]: 2025-11-25 10:54:23.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:23 compute-0 nova_compute[189381]: 2025-11-25 10:54:23.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:54:23 compute-0 nova_compute[189381]: 2025-11-25 10:54:23.046 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:54:23 compute-0 nova_compute[189381]: 2025-11-25 10:54:23.163 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:23 compute-0 nova_compute[189381]: 2025-11-25 10:54:23.192 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:24 compute-0 podman[249856]: 2025-11-25 10:54:24.96600926 +0000 UTC m=+0.083958623 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, container_name=kepler, release=1214.1726694543, config_id=edpm, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, release-0.7.12=)
Nov 25 10:54:25 compute-0 nova_compute[189381]: 2025-11-25 10:54:25.046 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:26 compute-0 nova_compute[189381]: 2025-11-25 10:54:26.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:26 compute-0 nova_compute[189381]: 2025-11-25 10:54:26.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:26 compute-0 nova_compute[189381]: 2025-11-25 10:54:26.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:54:26 compute-0 nova_compute[189381]: 2025-11-25 10:54:26.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:54:26 compute-0 nova_compute[189381]: 2025-11-25 10:54:26.033 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:54:27 compute-0 nova_compute[189381]: 2025-11-25 10:54:27.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:28 compute-0 nova_compute[189381]: 2025-11-25 10:54:28.167 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:28 compute-0 nova_compute[189381]: 2025-11-25 10:54:28.195 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:29 compute-0 podman[203557]: time="2025-11-25T10:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:54:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:54:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Nov 25 10:54:30 compute-0 nova_compute[189381]: 2025-11-25 10:54:30.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:30 compute-0 nova_compute[189381]: 2025-11-25 10:54:30.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
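
_reclaim_queued_deletes is one of the periodic tasks driven by oslo.service; with reclaim_instance_interval at its default of 0 the task wakes up and immediately bails, exactly as logged (a 0 interval means instances are deleted outright instead of soft-deleted). A minimal sketch of the pattern, simplified from Nova's manager, with option registration included so it runs standalone:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class SketchManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Mirrors the log line: a non-positive interval disables reclaim.
            if CONF.reclaim_instance_interval <= 0:
                return  # "CONF.reclaim_instance_interval <= 0, skipping..."

    m = SketchManager()
    m.run_periodic_tasks(context=None)  # runs due tasks once, then returns
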
Nov 25 10:54:30 compute-0 podman[249876]: 2025-11-25 10:54:30.998714068 +0000 UTC m=+0.099274783 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 25 10:54:31 compute-0 openstack_network_exporter[205722]: ERROR   10:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:54:31 compute-0 openstack_network_exporter[205722]: ERROR   10:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:54:31 compute-0 openstack_network_exporter[205722]: ERROR   10:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:54:31 compute-0 openstack_network_exporter[205722]: ERROR   10:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:54:31 compute-0 openstack_network_exporter[205722]: ERROR   10:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:54:33 compute-0 nova_compute[189381]: 2025-11-25 10:54:33.170 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:33 compute-0 nova_compute[189381]: 2025-11-25 10:54:33.196 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:33 compute-0 nova_compute[189381]: 2025-11-25 10:54:33.208 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:34 compute-0 nova_compute[189381]: 2025-11-25 10:54:34.025 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:34 compute-0 podman[249893]: 2025-11-25 10:54:34.948974277 +0000 UTC m=+0.066315036 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 25 10:54:34 compute-0 podman[249894]: 2025-11-25 10:54:34.949155353 +0000 UTC m=+0.063346271 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:54:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:54:36.057 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:54:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:54:36.057 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:54:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:54:36.057 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
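The three lockutils lines above are the standard oslo.concurrency pattern: neutron's ProcessMonitor serializes its child-process check behind a named lock, and the DEBUG output records how long the caller waited for the lock and how long it was held. A minimal sketch of the same pattern, with an illustrative function body rather than the actual neutron code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named lock held; oslo_concurrency emits the
        # "Acquiring" / "acquired" / "released" DEBUG lines seen above,
        # including the waited/held timings (0.001s and 0.000s here).
        pass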
Nov 25 10:54:38 compute-0 nova_compute[189381]: 2025-11-25 10:54:38.172 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:38 compute-0 nova_compute[189381]: 2025-11-25 10:54:38.200 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:38 compute-0 podman[249936]: 2025-11-25 10:54:38.98393388 +0000 UTC m=+0.086484166 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:54:39 compute-0 podman[249935]: 2025-11-25 10:54:39.010867593 +0000 UTC m=+0.116853328 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:54:39 compute-0 nova_compute[189381]: 2025-11-25 10:54:39.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:40 compute-0 nova_compute[189381]: 2025-11-25 10:54:40.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:54:43 compute-0 nova_compute[189381]: 2025-11-25 10:54:43.176 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:43 compute-0 nova_compute[189381]: 2025-11-25 10:54:43.204 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:43 compute-0 podman[249980]: 2025-11-25 10:54:43.99151139 +0000 UTC m=+0.092003484 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:54:48 compute-0 nova_compute[189381]: 2025-11-25 10:54:48.180 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:48 compute-0 nova_compute[189381]: 2025-11-25 10:54:48.207 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:52 compute-0 podman[250007]: 2025-11-25 10:54:52.95395251 +0000 UTC m=+0.064969738 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Nov 25 10:54:52 compute-0 podman[250006]: 2025-11-25 10:54:52.96059416 +0000 UTC m=+0.070490286 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Nov 25 10:54:53 compute-0 nova_compute[189381]: 2025-11-25 10:54:53.186 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:53 compute-0 nova_compute[189381]: 2025-11-25 10:54:53.209 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:55 compute-0 podman[250043]: 2025-11-25 10:54:55.980715053 +0000 UTC m=+0.099787438 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, vcs-type=git, name=ubi9, container_name=kepler, maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public)
Nov 25 10:54:58 compute-0 nova_compute[189381]: 2025-11-25 10:54:58.190 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:58 compute-0 nova_compute[189381]: 2025-11-25 10:54:58.211 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:54:59 compute-0 podman[203557]: time="2025-11-25T10:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:54:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:54:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4337 "" "Go-http-client/1.1"
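The two access-log style lines above are the podman system service answering libpod REST calls on its unix socket; the podman_exporter container mounts that socket as /run/podman/podman.sock (see its CONTAINER_HOST setting in the config logged earlier). As a point of reference, the same container-list query can be reproduced by speaking raw HTTP/1.1 to the socket from the Python standard library; the socket path is taken from that config and should be treated as an assumption:

    import socket

    SOCK = "/run/podman/podman.sock"  # path from the podman_exporter config above
    request = (
        "GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
        "Host: d\r\n"            # any host value is accepted on a unix socket
        "Connection: close\r\n\r\n"
    )
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(request.encode())
        reply = b""
        while chunk := s.recv(65536):
            reply += chunk
    # Status line plus the start of the JSON body, as in the 200 responses logged above.
    print(reply.decode(errors="replace")[:300])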
Nov 25 10:55:01 compute-0 openstack_network_exporter[205722]: ERROR   10:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:55:01 compute-0 openstack_network_exporter[205722]: ERROR   10:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:55:01 compute-0 openstack_network_exporter[205722]: ERROR   10:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:55:01 compute-0 openstack_network_exporter[205722]: ERROR   10:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
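The exporter errors above all have the same shape: an ovs-appctl-style call needs a *.ctl control socket to target, and none was found where the exporter looked. ovn-northd in particular only runs on controller nodes, so its absence on compute-0 is expected. The openstack_network_exporter config logged earlier mounts /var/run/openvswitch and /var/lib/openvswitch/ovn from the host, so listing the *.ctl sockets under those paths shows what the exporter can and cannot reach; the exact paths here are an assumption based on those volume mounts:

    import glob

    # Control sockets that appctl-style calls target; an empty result
    # reproduces the "no control socket files found" errors above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "none found")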
Nov 25 10:55:01 compute-0 podman[250063]: 2025-11-25 10:55:01.950692509 +0000 UTC m=+0.062851317 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 25 10:55:03 compute-0 nova_compute[189381]: 2025-11-25 10:55:03.195 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:03 compute-0 nova_compute[189381]: 2025-11-25 10:55:03.215 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.336 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.337 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816fbf0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.348 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:55:03.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:55:03 compute-0 sshd-session[250083]: Connection closed by authenticating user root 171.244.51.45 port 42780 [preauth]
Nov 25 10:55:05 compute-0 podman[250086]: 2025-11-25 10:55:05.963405362 +0000 UTC m=+0.077160438 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:55:05 compute-0 podman[250087]: 2025-11-25 10:55:05.966017647 +0000 UTC m=+0.073126082 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:55:08 compute-0 nova_compute[189381]: 2025-11-25 10:55:08.199 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:08 compute-0 nova_compute[189381]: 2025-11-25 10:55:08.219 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:09 compute-0 podman[250127]: 2025-11-25 10:55:09.973925881 +0000 UTC m=+0.088120542 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 10:55:10 compute-0 podman[250126]: 2025-11-25 10:55:10.043720535 +0000 UTC m=+0.149569877 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 10:55:10 compute-0 nova_compute[189381]: 2025-11-25 10:55:10.208 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:13 compute-0 nova_compute[189381]: 2025-11-25 10:55:13.204 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:13 compute-0 nova_compute[189381]: 2025-11-25 10:55:13.222 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:14 compute-0 podman[250172]: 2025-11-25 10:55:14.756398905 +0000 UTC m=+0.068821398 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:55:18 compute-0 nova_compute[189381]: 2025-11-25 10:55:18.053 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:18 compute-0 nova_compute[189381]: 2025-11-25 10:55:18.212 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:18 compute-0 nova_compute[189381]: 2025-11-25 10:55:18.224 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.051 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.051 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.385 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.386 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5379MB free_disk=72.20188522338867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.386 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.386 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.443 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.444 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.537 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.550 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.556 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:55:20 compute-0 nova_compute[189381]: 2025-11-25 10:55:20.557 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:55:22 compute-0 nova_compute[189381]: 2025-11-25 10:55:22.558 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:23 compute-0 nova_compute[189381]: 2025-11-25 10:55:23.217 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:23 compute-0 nova_compute[189381]: 2025-11-25 10:55:23.226 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:23 compute-0 podman[250196]: 2025-11-25 10:55:23.997477331 +0000 UTC m=+0.109080965 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Nov 25 10:55:24 compute-0 podman[250197]: 2025-11-25 10:55:24.02424477 +0000 UTC m=+0.122946913 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Nov 25 10:55:26 compute-0 nova_compute[189381]: 2025-11-25 10:55:26.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:26 compute-0 nova_compute[189381]: 2025-11-25 10:55:26.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:55:26 compute-0 nova_compute[189381]: 2025-11-25 10:55:26.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:55:26 compute-0 nova_compute[189381]: 2025-11-25 10:55:26.040 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:55:26 compute-0 nova_compute[189381]: 2025-11-25 10:55:26.041 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:26 compute-0 podman[250234]: 2025-11-25 10:55:26.942509655 +0000 UTC m=+0.062025293 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, name=ubi9, container_name=kepler, config_id=edpm, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 10:55:27 compute-0 nova_compute[189381]: 2025-11-25 10:55:27.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:27 compute-0 nova_compute[189381]: 2025-11-25 10:55:27.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:28 compute-0 nova_compute[189381]: 2025-11-25 10:55:28.222 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:28 compute-0 nova_compute[189381]: 2025-11-25 10:55:28.230 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:29 compute-0 podman[203557]: time="2025-11-25T10:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:55:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:55:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4329 "" "Go-http-client/1.1"
Nov 25 10:55:31 compute-0 nova_compute[189381]: 2025-11-25 10:55:31.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:31 compute-0 nova_compute[189381]: 2025-11-25 10:55:31.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:55:31 compute-0 openstack_network_exporter[205722]: ERROR   10:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:55:31 compute-0 openstack_network_exporter[205722]: ERROR   10:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:55:31 compute-0 openstack_network_exporter[205722]: ERROR   10:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:55:31 compute-0 openstack_network_exporter[205722]: ERROR   10:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:55:31 compute-0 openstack_network_exporter[205722]: ERROR   10:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:55:32 compute-0 podman[250253]: 2025-11-25 10:55:32.940176885 +0000 UTC m=+0.059599053 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 25 10:55:33 compute-0 nova_compute[189381]: 2025-11-25 10:55:33.226 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:33 compute-0 nova_compute[189381]: 2025-11-25 10:55:33.229 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:36 compute-0 nova_compute[189381]: 2025-11-25 10:55:36.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:55:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:55:36.058 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:55:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:55:36.059 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:55:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:55:36.059 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:55:36 compute-0 podman[250273]: 2025-11-25 10:55:36.939599377 +0000 UTC m=+0.056738031 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:55:36 compute-0 podman[250272]: 2025-11-25 10:55:36.954529316 +0000 UTC m=+0.073672258 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9)
Nov 25 10:55:38 compute-0 nova_compute[189381]: 2025-11-25 10:55:38.231 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:40 compute-0 podman[250316]: 2025-11-25 10:55:40.975398723 +0000 UTC m=+0.087673519 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 10:55:40 compute-0 podman[250315]: 2025-11-25 10:55:40.979574303 +0000 UTC m=+0.094713362 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 25 10:55:43 compute-0 nova_compute[189381]: 2025-11-25 10:55:43.233 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:55:44 compute-0 podman[250358]: 2025-11-25 10:55:44.96906433 +0000 UTC m=+0.085442116 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:55:48 compute-0 nova_compute[189381]: 2025-11-25 10:55:48.234 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:55:53 compute-0 nova_compute[189381]: 2025-11-25 10:55:53.236 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:55:54 compute-0 podman[250383]: 2025-11-25 10:55:54.962890787 +0000 UTC m=+0.071897426 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 10:55:54 compute-0 podman[250384]: 2025-11-25 10:55:54.982437789 +0000 UTC m=+0.089263375 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Nov 25 10:55:57 compute-0 podman[250420]: 2025-11-25 10:55:57.948840038 +0000 UTC m=+0.067306115 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, version=9.4, release-0.7.12=, vcs-type=git, release=1214.1726694543)
Nov 25 10:55:58 compute-0 nova_compute[189381]: 2025-11-25 10:55:58.238 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:55:58 compute-0 nova_compute[189381]: 2025-11-25 10:55:58.240 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:58 compute-0 nova_compute[189381]: 2025-11-25 10:55:58.240 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:55:58 compute-0 nova_compute[189381]: 2025-11-25 10:55:58.241 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:55:58 compute-0 nova_compute[189381]: 2025-11-25 10:55:58.241 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:55:58 compute-0 nova_compute[189381]: 2025-11-25 10:55:58.242 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:55:59 compute-0 podman[203557]: time="2025-11-25T10:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:55:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:55:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Nov 25 10:56:01 compute-0 openstack_network_exporter[205722]: ERROR   10:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:56:01 compute-0 openstack_network_exporter[205722]: ERROR   10:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:56:01 compute-0 openstack_network_exporter[205722]: ERROR   10:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:56:01 compute-0 openstack_network_exporter[205722]: ERROR   10:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:56:01 compute-0 openstack_network_exporter[205722]: ERROR   10:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:56:03 compute-0 nova_compute[189381]: 2025-11-25 10:56:03.243 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:56:03 compute-0 nova_compute[189381]: 2025-11-25 10:56:03.244 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:03 compute-0 nova_compute[189381]: 2025-11-25 10:56:03.245 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:56:03 compute-0 nova_compute[189381]: 2025-11-25 10:56:03.245 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:56:03 compute-0 nova_compute[189381]: 2025-11-25 10:56:03.245 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:56:03 compute-0 nova_compute[189381]: 2025-11-25 10:56:03.247 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:03 compute-0 podman[250441]: 2025-11-25 10:56:03.968069557 +0000 UTC m=+0.085210279 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 25 10:56:07 compute-0 podman[250461]: 2025-11-25 10:56:07.960810316 +0000 UTC m=+0.070564078 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 10:56:08 compute-0 podman[250460]: 2025-11-25 10:56:08.012941954 +0000 UTC m=+0.115238952 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Nov 25 10:56:08 compute-0 nova_compute[189381]: 2025-11-25 10:56:08.246 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:11 compute-0 podman[250500]: 2025-11-25 10:56:11.976152575 +0000 UTC m=+0.083196291 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 10:56:12 compute-0 podman[250499]: 2025-11-25 10:56:12.005337953 +0000 UTC m=+0.119967507 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:56:13 compute-0 nova_compute[189381]: 2025-11-25 10:56:13.249 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:15 compute-0 podman[250543]: 2025-11-25 10:56:15.93810878 +0000 UTC m=+0.057163743 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:56:18 compute-0 nova_compute[189381]: 2025-11-25 10:56:18.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:18 compute-0 nova_compute[189381]: 2025-11-25 10:56:18.251 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:56:18 compute-0 nova_compute[189381]: 2025-11-25 10:56:18.252 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:18 compute-0 nova_compute[189381]: 2025-11-25 10:56:18.252 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:56:18 compute-0 nova_compute[189381]: 2025-11-25 10:56:18.252 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:56:18 compute-0 nova_compute[189381]: 2025-11-25 10:56:18.253 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:56:18 compute-0 nova_compute[189381]: 2025-11-25 10:56:18.253 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:18 compute-0 nova_compute[189381]: 2025-11-25 10:56:18.255 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.132 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.132 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.132 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.133 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.462 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.463 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5367MB free_disk=72.20188522338867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.464 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.464 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.539 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.540 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.572 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.585 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.587 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:56:22 compute-0 nova_compute[189381]: 2025-11-25 10:56:22.588 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:56:23 compute-0 nova_compute[189381]: 2025-11-25 10:56:23.254 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:23 compute-0 nova_compute[189381]: 2025-11-25 10:56:23.256 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:25 compute-0 podman[250567]: 2025-11-25 10:56:25.964926815 +0000 UTC m=+0.075532001 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 10:56:25 compute-0 podman[250568]: 2025-11-25 10:56:25.982687055 +0000 UTC m=+0.089831682 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 10:56:26 compute-0 nova_compute[189381]: 2025-11-25 10:56:26.588 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:26 compute-0 nova_compute[189381]: 2025-11-25 10:56:26.589 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:56:26 compute-0 nova_compute[189381]: 2025-11-25 10:56:26.589 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:56:26 compute-0 nova_compute[189381]: 2025-11-25 10:56:26.603 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:56:26 compute-0 nova_compute[189381]: 2025-11-25 10:56:26.603 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:28 compute-0 nova_compute[189381]: 2025-11-25 10:56:28.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:28 compute-0 nova_compute[189381]: 2025-11-25 10:56:28.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:28 compute-0 nova_compute[189381]: 2025-11-25 10:56:28.255 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:28 compute-0 nova_compute[189381]: 2025-11-25 10:56:28.259 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:28 compute-0 podman[250607]: 2025-11-25 10:56:28.971166208 +0000 UTC m=+0.090391418 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, container_name=kepler, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, version=9.4)
Nov 25 10:56:29 compute-0 podman[203557]: time="2025-11-25T10:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:56:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:56:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Nov 25 10:56:31 compute-0 nova_compute[189381]: 2025-11-25 10:56:31.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:31 compute-0 nova_compute[189381]: 2025-11-25 10:56:31.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:56:31 compute-0 openstack_network_exporter[205722]: ERROR   10:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:56:31 compute-0 openstack_network_exporter[205722]: ERROR   10:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:56:31 compute-0 openstack_network_exporter[205722]: ERROR   10:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:56:31 compute-0 openstack_network_exporter[205722]: ERROR   10:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:56:31 compute-0 openstack_network_exporter[205722]: ERROR   10:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:56:33 compute-0 nova_compute[189381]: 2025-11-25 10:56:33.258 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:34 compute-0 podman[250627]: 2025-11-25 10:56:34.975797169 +0000 UTC m=+0.092969511 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:56:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:56:36.060 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:56:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:56:36.060 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:56:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:56:36.060 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:56:38 compute-0 nova_compute[189381]: 2025-11-25 10:56:38.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:38 compute-0 nova_compute[189381]: 2025-11-25 10:56:38.262 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:38 compute-0 nova_compute[189381]: 2025-11-25 10:56:38.264 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:38 compute-0 podman[250646]: 2025-11-25 10:56:38.972735809 +0000 UTC m=+0.080584137 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:56:38 compute-0 podman[250645]: 2025-11-25 10:56:38.984081274 +0000 UTC m=+0.094092404 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Nov 25 10:56:41 compute-0 nova_compute[189381]: 2025-11-25 10:56:41.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:56:42 compute-0 podman[250691]: 2025-11-25 10:56:42.945589188 +0000 UTC m=+0.061362604 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:56:42 compute-0 podman[250690]: 2025-11-25 10:56:42.966959662 +0000 UTC m=+0.086121205 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 10:56:43 compute-0 nova_compute[189381]: 2025-11-25 10:56:43.264 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:46 compute-0 podman[250732]: 2025-11-25 10:56:46.94229429 +0000 UTC m=+0.057478812 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:56:48 compute-0 nova_compute[189381]: 2025-11-25 10:56:48.267 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:48 compute-0 nova_compute[189381]: 2025-11-25 10:56:48.268 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:53 compute-0 nova_compute[189381]: 2025-11-25 10:56:53.270 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:56:56 compute-0 podman[250759]: 2025-11-25 10:56:56.949366659 +0000 UTC m=+0.064540876 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 10:56:56 compute-0 podman[250758]: 2025-11-25 10:56:56.972670588 +0000 UTC m=+0.090972694 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 10:56:58 compute-0 nova_compute[189381]: 2025-11-25 10:56:58.271 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:56:59 compute-0 podman[203557]: time="2025-11-25T10:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:56:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:56:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Nov 25 10:56:59 compute-0 podman[250795]: 2025-11-25 10:56:59.953647125 +0000 UTC m=+0.069352663 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, container_name=kepler, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 25 10:57:01 compute-0 openstack_network_exporter[205722]: ERROR   10:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:57:01 compute-0 openstack_network_exporter[205722]: ERROR   10:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:57:01 compute-0 openstack_network_exporter[205722]: ERROR   10:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:57:01 compute-0 openstack_network_exporter[205722]: ERROR   10:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:57:01 compute-0 openstack_network_exporter[205722]: ERROR   10:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:57:03 compute-0 nova_compute[189381]: 2025-11-25 10:57:03.274 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.337 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.338 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f2409732240>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:57:03.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:57:05 compute-0 podman[250813]: 2025-11-25 10:57:05.940433353 +0000 UTC m=+0.059605763 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 10:57:08 compute-0 nova_compute[189381]: 2025-11-25 10:57:08.276 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:57:09 compute-0 podman[250830]: 2025-11-25 10:57:09.948850292 +0000 UTC m=+0.060227234 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:57:09 compute-0 podman[250829]: 2025-11-25 10:57:09.958957865 +0000 UTC m=+0.071979724 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal)
Nov 25 10:57:13 compute-0 nova_compute[189381]: 2025-11-25 10:57:13.279 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:13 compute-0 podman[250874]: 2025-11-25 10:57:13.98531695 +0000 UTC m=+0.096621808 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd)
Nov 25 10:57:13 compute-0 podman[250873]: 2025-11-25 10:57:13.990211562 +0000 UTC m=+0.101840039 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 25 10:57:17 compute-0 podman[250917]: 2025-11-25 10:57:17.943129398 +0000 UTC m=+0.056176827 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:57:18 compute-0 nova_compute[189381]: 2025-11-25 10:57:18.281 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:20 compute-0 nova_compute[189381]: 2025-11-25 10:57:20.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:23 compute-0 nova_compute[189381]: 2025-11-25 10:57:23.282 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:57:23 compute-0 nova_compute[189381]: 2025-11-25 10:57:23.285 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:23 compute-0 nova_compute[189381]: 2025-11-25 10:57:23.285 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:57:23 compute-0 nova_compute[189381]: 2025-11-25 10:57:23.285 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:57:23 compute-0 nova_compute[189381]: 2025-11-25 10:57:23.286 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:57:23 compute-0 nova_compute[189381]: 2025-11-25 10:57:23.287 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.056 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.057 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.058 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.419 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.421 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5386MB free_disk=72.20188522338867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.421 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.421 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.674 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.675 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.730 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.755 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.757 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:57:24 compute-0 nova_compute[189381]: 2025-11-25 10:57:24.758 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:57:26 compute-0 nova_compute[189381]: 2025-11-25 10:57:26.758 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:26 compute-0 nova_compute[189381]: 2025-11-25 10:57:26.758 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:57:26 compute-0 nova_compute[189381]: 2025-11-25 10:57:26.758 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:57:26 compute-0 nova_compute[189381]: 2025-11-25 10:57:26.793 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:57:26 compute-0 nova_compute[189381]: 2025-11-25 10:57:26.793 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:27 compute-0 podman[250943]: 2025-11-25 10:57:27.959893222 +0000 UTC m=+0.069536504 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4)
Nov 25 10:57:27 compute-0 podman[250944]: 2025-11-25 10:57:27.971084936 +0000 UTC m=+0.074018714 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:57:28 compute-0 nova_compute[189381]: 2025-11-25 10:57:28.051 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:28 compute-0 nova_compute[189381]: 2025-11-25 10:57:28.288 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:57:28 compute-0 nova_compute[189381]: 2025-11-25 10:57:28.289 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:28 compute-0 nova_compute[189381]: 2025-11-25 10:57:28.290 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:57:28 compute-0 nova_compute[189381]: 2025-11-25 10:57:28.290 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:57:28 compute-0 nova_compute[189381]: 2025-11-25 10:57:28.291 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:57:28 compute-0 nova_compute[189381]: 2025-11-25 10:57:28.293 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:29 compute-0 podman[203557]: time="2025-11-25T10:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:57:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:57:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Nov 25 10:57:30 compute-0 nova_compute[189381]: 2025-11-25 10:57:30.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:30 compute-0 podman[250980]: 2025-11-25 10:57:30.956171599 +0000 UTC m=+0.064922961 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, release-0.7.12=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, build-date=2024-09-18T21:23:30)
Nov 25 10:57:31 compute-0 openstack_network_exporter[205722]: ERROR   10:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:57:31 compute-0 openstack_network_exporter[205722]: ERROR   10:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:57:31 compute-0 openstack_network_exporter[205722]: ERROR   10:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:57:31 compute-0 openstack_network_exporter[205722]: ERROR   10:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:57:31 compute-0 openstack_network_exporter[205722]: ERROR   10:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:57:33 compute-0 nova_compute[189381]: 2025-11-25 10:57:33.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:33 compute-0 nova_compute[189381]: 2025-11-25 10:57:33.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:57:33 compute-0 nova_compute[189381]: 2025-11-25 10:57:33.292 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:57:36.062 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:57:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:57:36.062 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:57:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:57:36.062 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:57:36 compute-0 podman[251001]: 2025-11-25 10:57:36.967805675 +0000 UTC m=+0.084340063 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 10:57:38 compute-0 nova_compute[189381]: 2025-11-25 10:57:38.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:57:38 compute-0 nova_compute[189381]: 2025-11-25 10:57:38.294 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:40 compute-0 podman[251020]: 2025-11-25 10:57:40.957569398 +0000 UTC m=+0.070270516 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git)
Nov 25 10:57:40 compute-0 podman[251021]: 2025-11-25 10:57:40.961879613 +0000 UTC m=+0.066796356 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:57:43 compute-0 nova_compute[189381]: 2025-11-25 10:57:43.297 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:57:44 compute-0 podman[251067]: 2025-11-25 10:57:44.746046957 +0000 UTC m=+0.073206500 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 25 10:57:44 compute-0 podman[251066]: 2025-11-25 10:57:44.768316951 +0000 UTC m=+0.102875609 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 10:57:48 compute-0 nova_compute[189381]: 2025-11-25 10:57:48.299 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:48 compute-0 podman[251111]: 2025-11-25 10:57:48.987525317 +0000 UTC m=+0.080796609 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:57:53 compute-0 nova_compute[189381]: 2025-11-25 10:57:53.302 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:58 compute-0 nova_compute[189381]: 2025-11-25 10:57:58.304 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:57:58 compute-0 nova_compute[189381]: 2025-11-25 10:57:58.306 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:58 compute-0 nova_compute[189381]: 2025-11-25 10:57:58.306 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:57:58 compute-0 nova_compute[189381]: 2025-11-25 10:57:58.306 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:57:58 compute-0 nova_compute[189381]: 2025-11-25 10:57:58.307 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:57:58 compute-0 nova_compute[189381]: 2025-11-25 10:57:58.307 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:57:58 compute-0 podman[251136]: 2025-11-25 10:57:58.973881982 +0000 UTC m=+0.085352871 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 10:57:58 compute-0 podman[251135]: 2025-11-25 10:57:58.97520123 +0000 UTC m=+0.090803139 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 25 10:57:59 compute-0 podman[203557]: time="2025-11-25T10:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:57:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:57:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4337 "" "Go-http-client/1.1"
Nov 25 10:58:01 compute-0 openstack_network_exporter[205722]: ERROR   10:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:58:01 compute-0 openstack_network_exporter[205722]: ERROR   10:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:58:01 compute-0 openstack_network_exporter[205722]: ERROR   10:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:58:01 compute-0 openstack_network_exporter[205722]: ERROR   10:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:58:01 compute-0 openstack_network_exporter[205722]: ERROR   10:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:58:01 compute-0 podman[251172]: 2025-11-25 10:58:01.947836793 +0000 UTC m=+0.060057629 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, architecture=x86_64, distribution-scope=public, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container)
Nov 25 10:58:03 compute-0 nova_compute[189381]: 2025-11-25 10:58:03.308 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:07 compute-0 podman[251192]: 2025-11-25 10:58:07.943678723 +0000 UTC m=+0.057640860 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 10:58:08 compute-0 nova_compute[189381]: 2025-11-25 10:58:08.309 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:58:08 compute-0 nova_compute[189381]: 2025-11-25 10:58:08.312 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:08 compute-0 nova_compute[189381]: 2025-11-25 10:58:08.312 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:58:08 compute-0 nova_compute[189381]: 2025-11-25 10:58:08.312 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:58:08 compute-0 nova_compute[189381]: 2025-11-25 10:58:08.313 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:58:08 compute-0 nova_compute[189381]: 2025-11-25 10:58:08.314 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:11 compute-0 podman[251210]: 2025-11-25 10:58:11.962417376 +0000 UTC m=+0.059560385 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:58:11 compute-0 podman[251209]: 2025-11-25 10:58:11.994562886 +0000 UTC m=+0.092579440 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Nov 25 10:58:13 compute-0 nova_compute[189381]: 2025-11-25 10:58:13.314 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:14 compute-0 podman[251254]: 2025-11-25 10:58:14.952142733 +0000 UTC m=+0.065612630 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 25 10:58:15 compute-0 podman[251253]: 2025-11-25 10:58:15.00731009 +0000 UTC m=+0.123988440 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 10:58:18 compute-0 nova_compute[189381]: 2025-11-25 10:58:18.316 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:58:19 compute-0 podman[251300]: 2025-11-25 10:58:19.944775984 +0000 UTC m=+0.060589975 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 10:58:22 compute-0 nova_compute[189381]: 2025-11-25 10:58:22.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:23 compute-0 nova_compute[189381]: 2025-11-25 10:58:23.318 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:58:23 compute-0 nova_compute[189381]: 2025-11-25 10:58:23.320 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:23 compute-0 nova_compute[189381]: 2025-11-25 10:58:23.320 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:58:23 compute-0 nova_compute[189381]: 2025-11-25 10:58:23.321 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:58:23 compute-0 nova_compute[189381]: 2025-11-25 10:58:23.321 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:58:23 compute-0 nova_compute[189381]: 2025-11-25 10:58:23.322 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
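[Note] The ovsdbapp DEBUG lines above (a ~5000 ms timeout, "idle 5002 ms, sending inactivity probe", then IDLE → ACTIVE, then [POLLIN]) are the OVS client library's keepalive cycle against the local ovsdb-server on tcp:127.0.0.1:6640: after roughly five seconds of silence the client sends an echo probe, parks the connection in IDLE while it waits, and returns to ACTIVE when the reply arrives. A minimal sketch of that timer logic, with illustrative names rather than the real ovs.reconnect API:

```python
# Minimal sketch of an inactivity-probe keepalive, loosely modeled on the
# ACTIVE/IDLE cycle visible in the ovsdbapp DEBUG lines above. Illustration
# only; the real state machine lives in /usr/lib64/.../ovs/reconnect.py.
import time

PROBE_INTERVAL_MS = 5000  # matches the ~5000 ms idle threshold in the log

class ProbeTimer:  # hypothetical helper, not part of the ovs library
    def __init__(self):
        self.state = "ACTIVE"
        self.last_activity = time.monotonic()

    def on_activity(self):
        # Any received message (the [POLLIN] wakeups) counts as activity.
        self.last_activity = time.monotonic()
        self.state = "ACTIVE"

    def tick(self, send_probe):
        idle_ms = (time.monotonic() - self.last_activity) * 1000
        if self.state == "ACTIVE" and idle_ms >= PROBE_INTERVAL_MS:
            send_probe()           # e.g. a JSON-RPC echo request
            self.state = "IDLE"    # waiting for the echo reply
        elif self.state == "IDLE" and idle_ms >= 2 * PROBE_INTERVAL_MS:
            raise ConnectionError("no reply to inactivity probe")
```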
Nov 25 10:58:24 compute-0 nova_compute[189381]: 2025-11-25 10:58:24.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.053 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.053 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.054 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.054 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.407 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.408 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5389MB free_disk=72.20188522338867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.408 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.408 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.532 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.533 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.555 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.584 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.584 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.602 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.622 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.647 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.657 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.659 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:58:25 compute-0 nova_compute[189381]: 2025-11-25 10:58:25.659 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
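[Note] The inventory payload the resource tracker reports in this audit implies concrete schedulable capacities: Placement treats capacity as (total - reserved) * allocation_ratio per resource class. A quick check against the exact values in the log above:

```python
# Effective capacity implied by the inventory payload logged above, using
# the standard Placement formula: capacity = (total - reserved) * ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```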
Nov 25 10:58:26 compute-0 nova_compute[189381]: 2025-11-25 10:58:26.660 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:26 compute-0 nova_compute[189381]: 2025-11-25 10:58:26.660 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:58:26 compute-0 nova_compute[189381]: 2025-11-25 10:58:26.661 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:58:26 compute-0 nova_compute[189381]: 2025-11-25 10:58:26.754 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:58:27 compute-0 nova_compute[189381]: 2025-11-25 10:58:27.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:28 compute-0 nova_compute[189381]: 2025-11-25 10:58:28.323 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:29 compute-0 nova_compute[189381]: 2025-11-25 10:58:29.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:29 compute-0 podman[203557]: time="2025-11-25T10:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:58:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:58:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Nov 25 10:58:29 compute-0 podman[251325]: 2025-11-25 10:58:29.975485962 +0000 UTC m=+0.086296089 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 10:58:29 compute-0 podman[251324]: 2025-11-25 10:58:29.984148913 +0000 UTC m=+0.098831812 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 25 10:58:30 compute-0 nova_compute[189381]: 2025-11-25 10:58:30.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:31 compute-0 openstack_network_exporter[205722]: ERROR   10:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:58:31 compute-0 openstack_network_exporter[205722]: ERROR   10:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:58:31 compute-0 openstack_network_exporter[205722]: ERROR   10:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:58:31 compute-0 openstack_network_exporter[205722]: ERROR   10:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:58:31 compute-0 openstack_network_exporter[205722]: ERROR   10:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
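[Note] These exporter errors are consistent with running on a compute node: ovn-northd and the OVN/OVS database servers live on the control plane, so no control sockets exist for them here, and the pmd-* appctl calls only apply to a userspace (DPDK) datapath, which this host does not use. The lookup that appears to fail is a glob for the daemon's unixctl socket, named <daemon>.<pid>.ctl in its run directory; a hedged, illustrative sketch (find_ctl_socket is not the exporter's actual code):

```python
# Hedged illustration of the lookup that appears to fail above: OVS/OVN
# daemons expose a control socket named <daemon>.<pid>.ctl in their run
# directory, and appctl-style tools glob for it to discover the PID.
import glob

def find_ctl_socket(rundir, daemon):  # illustrative name, not exporter code
    # e.g. /var/run/ovn/ovn-northd.123.ctl on a node that runs ovn-northd
    matches = glob.glob(f"{rundir}/{daemon}.*.ctl")
    return matches[0] if matches else None

print(find_ctl_socket("/var/run/ovn", "ovn-northd"))  # None on a compute node
```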
Nov 25 10:58:32 compute-0 podman[251364]: 2025-11-25 10:58:32.948044773 +0000 UTC m=+0.067622089 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.component=ubi9-container, version=9.4, container_name=kepler, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, name=ubi9, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc.)
Nov 25 10:58:33 compute-0 nova_compute[189381]: 2025-11-25 10:58:33.326 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:58:33 compute-0 nova_compute[189381]: 2025-11-25 10:58:33.327 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:33 compute-0 nova_compute[189381]: 2025-11-25 10:58:33.328 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:58:33 compute-0 nova_compute[189381]: 2025-11-25 10:58:33.328 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:58:33 compute-0 nova_compute[189381]: 2025-11-25 10:58:33.328 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:58:33 compute-0 nova_compute[189381]: 2025-11-25 10:58:33.329 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:35 compute-0 nova_compute[189381]: 2025-11-25 10:58:35.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:35 compute-0 nova_compute[189381]: 2025-11-25 10:58:35.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:58:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:58:36.063 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:58:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:58:36.064 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:58:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:58:36.064 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:58:38 compute-0 nova_compute[189381]: 2025-11-25 10:58:38.331 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:38 compute-0 podman[251383]: 2025-11-25 10:58:38.959492734 +0000 UTC m=+0.074371104 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 10:58:40 compute-0 nova_compute[189381]: 2025-11-25 10:58:40.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:42 compute-0 podman[251404]: 2025-11-25 10:58:42.948419414 +0000 UTC m=+0.058904196 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 10:58:42 compute-0 podman[251403]: 2025-11-25 10:58:42.958031452 +0000 UTC m=+0.072574622 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.buildah.version=1.33.7, architecture=x86_64)
Nov 25 10:58:43 compute-0 nova_compute[189381]: 2025-11-25 10:58:43.332 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:43 compute-0 nova_compute[189381]: 2025-11-25 10:58:43.333 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:45 compute-0 podman[251447]: 2025-11-25 10:58:45.976160901 +0000 UTC m=+0.087121703 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 10:58:46 compute-0 podman[251446]: 2025-11-25 10:58:46.013257965 +0000 UTC m=+0.126802692 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 10:58:46 compute-0 nova_compute[189381]: 2025-11-25 10:58:46.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:58:48 compute-0 nova_compute[189381]: 2025-11-25 10:58:48.332 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:48 compute-0 nova_compute[189381]: 2025-11-25 10:58:48.336 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:50 compute-0 podman[251491]: 2025-11-25 10:58:50.982289814 +0000 UTC m=+0.081915262 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 10:58:53 compute-0 nova_compute[189381]: 2025-11-25 10:58:53.334 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:53 compute-0 nova_compute[189381]: 2025-11-25 10:58:53.337 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:53 compute-0 sshd-session[251514]: Connection closed by authenticating user root 171.244.51.45 port 42732 [preauth]
Nov 25 10:58:58 compute-0 nova_compute[189381]: 2025-11-25 10:58:58.337 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:58 compute-0 nova_compute[189381]: 2025-11-25 10:58:58.339 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:58:59 compute-0 podman[203557]: time="2025-11-25T10:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:58:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:58:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
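[Note] The GET lines above are a client polling the libpod REST API over the podman socket; given the CONTAINER_HOST value in the podman_exporter config earlier, that client is most likely prometheus-podman-exporter. A stdlib-only sketch of the same request (UnixHTTPConnection is an illustrative helper, not podman code):

```python
# Hedged sketch of the request logged above: GET /v4.9.3/libpod/containers/json
# against the libpod REST API on the podman unix socket.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")  # host is unused; we dial the socket
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")  # cf. "200 28290" in the log
```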
Nov 25 10:59:00 compute-0 podman[251516]: 2025-11-25 10:59:00.96496466 +0000 UTC m=+0.081736378 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 10:59:00 compute-0 podman[251517]: 2025-11-25 10:59:00.985575897 +0000 UTC m=+0.099504963 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 25 10:59:01 compute-0 openstack_network_exporter[205722]: ERROR   10:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:59:01 compute-0 openstack_network_exporter[205722]: ERROR   10:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:59:01 compute-0 openstack_network_exporter[205722]: ERROR   10:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:59:01 compute-0 openstack_network_exporter[205722]: ERROR   10:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:59:01 compute-0 openstack_network_exporter[205722]: ERROR   10:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.338 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.338 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 nova_compute[189381]: 2025-11-25 10:59:03.339 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 nova_compute[189381]: 2025-11-25 10:59:03.340 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 10:59:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 10:59:03.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
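The stretch above is one complete ceilometer polling cycle, and the manager.py line numbers mark its phases: each pollster is registered against the shared ThreadPoolExecutor (manager.py:276), its discovery method runs once per cycle and the empty result is cached under local_instances (manager.py:294), every meter is skipped because this host currently runs no instances (manager.py:321), and each task ends with "Finished processing pollster" (manager.py:272). A minimal sketch of that flow follows; it is not ceilometer's actual code, and the pollster attributes (name, discovery_method, get_samples) are illustrative stand-ins:

```python
# Sketch of the register -> discover -> skip -> finish cycle traced above.
# Real code would synchronize access to the shared dicts across threads.
from concurrent.futures import ThreadPoolExecutor

def run_polling_cycle(pollsters, discover):
    executor = ThreadPoolExecutor()   # shared by every pollster (manager.py:276)
    discovery_cache = {}              # e.g. {'local_instances': []}
    history = {}                      # meter name -> samples from this cycle
    futures = [executor.submit(poll_one, p, discover, discovery_cache, history)
               for p in pollsters]
    for pollster, future in zip(pollsters, futures):
        future.result()
        print(f"Finished processing pollster [{pollster.name}].")  # manager.py:272

def poll_one(pollster, discover, discovery_cache, history):
    method = pollster.discovery_method               # here: 'local_instances'
    if method not in discovery_cache:                # discovery once per cycle
        discovery_cache[method] = discover(method)   # manager.py:294
    resources = discovery_cache[method]
    history[pollster.name] = []
    if not resources:                                # no instances on this host
        print(f"Skip pollster {pollster.name}, no resources found this cycle")
        return                                       # manager.py:321
    history[pollster.name] = list(pollster.get_samples(resources))
```

Note how the pollster history and discovery cache grow exactly as the later "Registering pollster" entries show: each finished meter leaves an empty list behind in the history dict, and local_instances stays cached as [] for the rest of the cycle.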
Nov 25 10:59:03 compute-0 podman[251557]: 2025-11-25 10:59:03.943007482 +0000 UTC m=+0.062409127 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, distribution-scope=public)
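The podman[251557] entry is the periodic healthcheck for the kepler container: health_status=healthy with a zero failing streak, produced by the configured test command /openstack/healthcheck kepler. A hypothetical helper, not part of any service logged here, for reading that state back out of podman; the JSON key holding health data moved from State.Healthcheck to State.Health across podman releases, so both are tried:

```python
# Hypothetical ops helper: read a container's health status the same way
# this journal line reports it (health_status=healthy).
import json
import subprocess

def container_health(name: str) -> str:
    out = subprocess.run(["podman", "inspect", name],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0].get("State", {})
    # Key name differs between podman versions.
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

print(container_health("kepler"))  # -> "healthy" on this host
```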
Nov 25 10:59:08 compute-0 nova_compute[189381]: 2025-11-25 10:59:08.340 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:08 compute-0 nova_compute[189381]: 2025-11-25 10:59:08.342 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:59:08.843 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 10:59:08 compute-0 nova_compute[189381]: 2025-11-25 10:59:08.845 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:59:08.845 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 10:59:09 compute-0 podman[251576]: 2025-11-25 10:59:09.945880125 +0000 UTC m=+0.058528245 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 10:59:10 compute-0 nova_compute[189381]: 2025-11-25 10:59:10.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:10 compute-0 nova_compute[189381]: 2025-11-25 10:59:10.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
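These two lines, and the _poll_rebooting_instances / _poll_volume_usage / update_available_resource entries further down, come from oslo_service's periodic task runner, which nova's ComputeManager drives on a per-task timer. A minimal sketch of how such a task is declared; the class and spacing value are illustrative, not nova's actual settings:

```python
# Sketch of the oslo_service machinery behind "Running periodic task ..."
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)  # seconds between runs
    def _cleanup_incomplete_migrations(self, context):
        # nova's real task scans for deleted instances whose migrations
        # never completed and removes their leftover files on this host
        pass
```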
Nov 25 10:59:13 compute-0 nova_compute[189381]: 2025-11-25 10:59:13.343 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:13 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:59:13.849 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
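This transaction closes the loop opened at 10:59:08: SB_Global.nb_cfg moved from 9 to 10 (the SbGlobalUpdateEvent above), the metadata agent deliberately waited five seconds ("Delaying updating chassis table for 5 seconds"), and it now acknowledges the new configuration by stamping neutron:ovn-metadata-sb-cfg=10 into its own Chassis_Private row (record 3fcb3423-a4d5-4f72-950c-307893e4a985). A sketch of issuing the same write through ovsdbapp, assuming sb_idl is an already-connected southbound API handle:

```python
# Sketch mirroring the logged DbSetCommand; `sb_idl` and `chassis_uuid`
# are assumed inputs (a connected ovsdbapp API and the Chassis_Private row).
def ack_nb_cfg(sb_idl, chassis_uuid, nb_cfg):
    sb_idl.db_set(
        "Chassis_Private", chassis_uuid,
        # Mapping values are merged into the existing external_ids column.
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        if_exists=True,  # no-op instead of an error if the row vanished
    ).execute(check_error=True)
```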
Nov 25 10:59:13 compute-0 podman[251595]: 2025-11-25 10:59:13.968318855 +0000 UTC m=+0.083166109 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41)
Nov 25 10:59:13 compute-0 podman[251596]: 2025-11-25 10:59:13.98715774 +0000 UTC m=+0.099668216 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 10:59:16 compute-0 podman[251638]: 2025-11-25 10:59:16.947331433 +0000 UTC m=+0.061682947 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 25 10:59:16 compute-0 podman[251637]: 2025-11-25 10:59:16.984886269 +0000 UTC m=+0.097675127 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 10:59:18 compute-0 nova_compute[189381]: 2025-11-25 10:59:18.344 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:59:18 compute-0 nova_compute[189381]: 2025-11-25 10:59:18.345 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:18 compute-0 nova_compute[189381]: 2025-11-25 10:59:18.345 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:59:18 compute-0 nova_compute[189381]: 2025-11-25 10:59:18.345 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:59:18 compute-0 nova_compute[189381]: 2025-11-25 10:59:18.346 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:59:18 compute-0 nova_compute[189381]: 2025-11-25 10:59:18.347 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
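These six nova_compute lines are one keepalive round-trip of the OVSDB client against tcp:127.0.0.1:6640: the poller wakes after its ~5 s timeout, finds the connection idle for 5002 ms, sends an inactivity probe and drops from ACTIVE to IDLE, and the reply ([POLLIN] on fd 26) promotes it straight back to ACTIVE. A toy version of that state machine; ovs/reconnect.py implements the real one, with more states:

```python
import time

PROBE_INTERVAL = 5.0  # seconds; matches the ~5000 ms idle gaps logged above

class Keepalive:
    """Toy ACTIVE/IDLE probe cycle; an assumed simplification of ovs reconnect."""
    def __init__(self):
        self.state = "ACTIVE"
        self.last_activity = time.monotonic()

    def received(self):
        # Any inbound data ("[POLLIN] on fd 26") counts as activity.
        self.last_activity = time.monotonic()
        self.state = "ACTIVE"            # "entering ACTIVE"

    def tick(self, send_probe, reconnect):
        idle = time.monotonic() - self.last_activity
        if self.state == "ACTIVE" and idle > PROBE_INTERVAL:
            send_probe()                 # "sending inactivity probe"
            self.state = "IDLE"          # "entering IDLE"
        elif self.state == "IDLE" and idle > 2 * PROBE_INTERVAL:
            reconnect()                  # probe unanswered: drop and redial
```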
Nov 25 10:59:21 compute-0 podman[251683]: 2025-11-25 10:59:21.945774773 +0000 UTC m=+0.063129689 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:59:22 compute-0 nova_compute[189381]: 2025-11-25 10:59:22.036 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:23 compute-0 nova_compute[189381]: 2025-11-25 10:59:23.347 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:59:24 compute-0 nova_compute[189381]: 2025-11-25 10:59:24.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.049 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.368 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.369 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5375MB free_disk=72.201904296875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.370 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.370 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.488 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.489 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.610 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.625 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
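Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 32 VCPUs, 7167 MB of RAM, and 70.2 GB of disk. Worked out with the numbers from the log:

    # Capacity formula used by Placement: (total - reserved) * allocation_ratio
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2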
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.627 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.627 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.628 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.628 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 10:59:25 compute-0 nova_compute[189381]: 2025-11-25 10:59:25.641 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 10:59:28 compute-0 nova_compute[189381]: 2025-11-25 10:59:28.350 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:28 compute-0 nova_compute[189381]: 2025-11-25 10:59:28.642 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:28 compute-0 nova_compute[189381]: 2025-11-25 10:59:28.643 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 10:59:28 compute-0 nova_compute[189381]: 2025-11-25 10:59:28.643 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 10:59:28 compute-0 nova_compute[189381]: 2025-11-25 10:59:28.663 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 10:59:29 compute-0 nova_compute[189381]: 2025-11-25 10:59:29.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:29 compute-0 podman[203557]: time="2025-11-25T10:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:59:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:59:29 compute-0 podman[203557]: @ - - [25/Nov/2025:10:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4325 "" "Go-http-client/1.1"
Nov 25 10:59:31 compute-0 nova_compute[189381]: 2025-11-25 10:59:31.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:31 compute-0 nova_compute[189381]: 2025-11-25 10:59:31.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:31 compute-0 openstack_network_exporter[205722]: ERROR   10:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 10:59:31 compute-0 openstack_network_exporter[205722]: ERROR   10:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:59:31 compute-0 openstack_network_exporter[205722]: ERROR   10:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 10:59:31 compute-0 openstack_network_exporter[205722]: ERROR   10:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 10:59:31 compute-0 openstack_network_exporter[205722]: ERROR   10:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
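The exporter's appctl calls fail because no unixctl control sockets are visible inside its container; per its volume list later in this section it expects them under /run/openvswitch (ovsdb-server, ovs-vswitchd) and /run/ovn. Running daemons create sockets named <daemon>.<pid>.ctl there, so a quick check is (paths assumed from the mounts shown in the log):

    import glob

    # Control sockets appear as e.g. ovs-vswitchd.<pid>.ctl while the daemon
    # runs; an empty list explains "no control socket files found".
    for d in ("/run/openvswitch", "/run/ovn"):
        print(d, "->", glob.glob(f"{d}/*.ctl") or "no control sockets")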
Nov 25 10:59:31 compute-0 sshd-session[251705]: Received disconnect from 150.95.85.24 port 35382:11:  [preauth]
Nov 25 10:59:31 compute-0 sshd-session[251705]: Disconnected from authenticating user root 150.95.85.24 port 35382 [preauth]
Nov 25 10:59:31 compute-0 podman[251707]: 2025-11-25 10:59:31.952270293 +0000 UTC m=+0.067140564 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 25 10:59:31 compute-0 podman[251708]: 2025-11-25 10:59:31.96289134 +0000 UTC m=+0.073590961 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 25 10:59:33 compute-0 nova_compute[189381]: 2025-11-25 10:59:33.352 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 10:59:33 compute-0 nova_compute[189381]: 2025-11-25 10:59:33.355 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:33 compute-0 nova_compute[189381]: 2025-11-25 10:59:33.356 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 10:59:33 compute-0 nova_compute[189381]: 2025-11-25 10:59:33.356 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:59:33 compute-0 nova_compute[189381]: 2025-11-25 10:59:33.356 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 10:59:33 compute-0 nova_compute[189381]: 2025-11-25 10:59:33.359 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:34 compute-0 podman[251744]: 2025-11-25 10:59:34.988070753 +0000 UTC m=+0.107353189 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container)
Nov 25 10:59:36 compute-0 nova_compute[189381]: 2025-11-25 10:59:36.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:36 compute-0 nova_compute[189381]: 2025-11-25 10:59:36.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 10:59:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:59:36.064 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 10:59:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:59:36.064 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 10:59:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 10:59:36.064 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 10:59:38 compute-0 nova_compute[189381]: 2025-11-25 10:59:38.356 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:38 compute-0 nova_compute[189381]: 2025-11-25 10:59:38.359 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:39 compute-0 ovn_controller[97779]: 2025-11-25T10:59:39Z|00065|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 25 10:59:40 compute-0 podman[251763]: 2025-11-25 10:59:40.951853816 +0000 UTC m=+0.062603144 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 10:59:42 compute-0 nova_compute[189381]: 2025-11-25 10:59:42.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:42 compute-0 nova_compute[189381]: 2025-11-25 10:59:42.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 10:59:43 compute-0 nova_compute[189381]: 2025-11-25 10:59:43.357 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:43 compute-0 nova_compute[189381]: 2025-11-25 10:59:43.360 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:44 compute-0 podman[251782]: 2025-11-25 10:59:44.750091105 +0000 UTC m=+0.075331782 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Nov 25 10:59:44 compute-0 podman[251783]: 2025-11-25 10:59:44.757439858 +0000 UTC m=+0.074971411 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
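node_exporter's systemd collector above is restricted by the unit-include regex on its command line; only matching units are exported. A quick check of which units the pattern admits (assuming the include filter is anchored, as Prometheus-style include/exclude filters normally are; the config stores the backslash escaped as \\.service):

    import re

    # Pattern from --collector.systemd.unit-include in the log above.
    pat = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "openvswitch.service",
                 "sshd.service", "virtqemud.service"]:
        print(unit, bool(pat.fullmatch(unit)))  # sshd.service -> False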
Nov 25 10:59:47 compute-0 podman[251825]: 2025-11-25 10:59:47.956324879 +0000 UTC m=+0.071413198 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 25 10:59:47 compute-0 podman[251824]: 2025-11-25 10:59:47.990187859 +0000 UTC m=+0.107884654 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 25 10:59:48 compute-0 nova_compute[189381]: 2025-11-25 10:59:48.357 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:48 compute-0 nova_compute[189381]: 2025-11-25 10:59:48.361 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:52 compute-0 podman[251869]: 2025-11-25 10:59:52.941022431 +0000 UTC m=+0.059373770 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 10:59:53 compute-0 nova_compute[189381]: 2025-11-25 10:59:53.361 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:58 compute-0 nova_compute[189381]: 2025-11-25 10:59:58.361 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:58 compute-0 nova_compute[189381]: 2025-11-25 10:59:58.364 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 10:59:59 compute-0 podman[203557]: time="2025-11-25T10:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 10:59:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 10:59:59 compute-0 podman[203557]: @ - - [25/Nov/2025:10:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Nov 25 11:00:01 compute-0 openstack_network_exporter[205722]: ERROR   11:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:00:01 compute-0 openstack_network_exporter[205722]: ERROR   11:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:00:01 compute-0 openstack_network_exporter[205722]: ERROR   11:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:00:01 compute-0 openstack_network_exporter[205722]: ERROR   11:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:00:01 compute-0 openstack_network_exporter[205722]: ERROR   11:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:00:02 compute-0 podman[251893]: 2025-11-25 11:00:02.969188047 +0000 UTC m=+0.084654834 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 11:00:02 compute-0 podman[251894]: 2025-11-25 11:00:02.978263169 +0000 UTC m=+0.089103672 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:00:03 compute-0 nova_compute[189381]: 2025-11-25 11:00:03.364 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:03 compute-0 nova_compute[189381]: 2025-11-25 11:00:03.366 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:05 compute-0 podman[251930]: 2025-11-25 11:00:05.942968562 +0000 UTC m=+0.062538692 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=base rhel9, name=ubi9)
Nov 25 11:00:08 compute-0 nova_compute[189381]: 2025-11-25 11:00:08.367 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:08 compute-0 nova_compute[189381]: 2025-11-25 11:00:08.368 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:11 compute-0 podman[251950]: 2025-11-25 11:00:11.970636623 +0000 UTC m=+0.090178341 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 11:00:13 compute-0 nova_compute[189381]: 2025-11-25 11:00:13.368 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:00:13 compute-0 nova_compute[189381]: 2025-11-25 11:00:13.369 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:13 compute-0 nova_compute[189381]: 2025-11-25 11:00:13.369 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 25 11:00:13 compute-0 nova_compute[189381]: 2025-11-25 11:00:13.370 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 11:00:13 compute-0 nova_compute[189381]: 2025-11-25 11:00:13.370 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 25 11:00:13 compute-0 nova_compute[189381]: 2025-11-25 11:00:13.371 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:14 compute-0 podman[251970]: 2025-11-25 11:00:14.935881972 +0000 UTC m=+0.050937835 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:00:14 compute-0 podman[251969]: 2025-11-25 11:00:14.976731715 +0000 UTC m=+0.094584189 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Nov 25 11:00:18 compute-0 nova_compute[189381]: 2025-11-25 11:00:18.372 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:00:18 compute-0 podman[252011]: 2025-11-25 11:00:18.98727598 +0000 UTC m=+0.084616831 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:00:19 compute-0 podman[252010]: 2025-11-25 11:00:19.033934061 +0000 UTC m=+0.144748422 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:00:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:22.405 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:00:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:22.406 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:00:22 compute-0 nova_compute[189381]: 2025-11-25 11:00:22.408 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:23.132 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:00:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:23.133 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:00:23 compute-0 nova_compute[189381]: 2025-11-25 11:00:23.135 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:23 compute-0 nova_compute[189381]: 2025-11-25 11:00:23.374 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:23 compute-0 nova_compute[189381]: 2025-11-25 11:00:23.375 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:23 compute-0 podman[252056]: 2025-11-25 11:00:23.974527737 +0000 UTC m=+0.089119110 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:00:24 compute-0 nova_compute[189381]: 2025-11-25 11:00:24.038 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:25 compute-0 nova_compute[189381]: 2025-11-25 11:00:25.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:26 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:26.136 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:00:26 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:26.676 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.045 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.045 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.046 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.046 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.344 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.346 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5395MB free_disk=72.20188522338867GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.346 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:00:27 compute-0 nova_compute[189381]: 2025-11-25 11:00:27.346 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:00:28 compute-0 nova_compute[189381]: 2025-11-25 11:00:28.376 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:29 compute-0 podman[203557]: time="2025-11-25T11:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:00:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:00:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Nov 25 11:00:31 compute-0 openstack_network_exporter[205722]: ERROR   11:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:00:31 compute-0 openstack_network_exporter[205722]: ERROR   11:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:00:31 compute-0 openstack_network_exporter[205722]: ERROR   11:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:00:31 compute-0 openstack_network_exporter[205722]: ERROR   11:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:00:31 compute-0 openstack_network_exporter[205722]: ERROR   11:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:00:31 compute-0 nova_compute[189381]: 2025-11-25 11:00:31.934 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:00:31 compute-0 nova_compute[189381]: 2025-11-25 11:00:31.934 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:00:31 compute-0 nova_compute[189381]: 2025-11-25 11:00:31.974 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:00:31 compute-0 nova_compute[189381]: 2025-11-25 11:00:31.991 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:00:31 compute-0 nova_compute[189381]: 2025-11-25 11:00:31.994 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:00:31 compute-0 nova_compute[189381]: 2025-11-25 11:00:31.995 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:00:33 compute-0 nova_compute[189381]: 2025-11-25 11:00:33.378 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:33 compute-0 nova_compute[189381]: 2025-11-25 11:00:33.380 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:33 compute-0 podman[252080]: 2025-11-25 11:00:33.969332987 +0000 UTC m=+0.080922564 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute)
Nov 25 11:00:33 compute-0 podman[252081]: 2025-11-25 11:00:33.983890088 +0000 UTC m=+0.080249974 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 25 11:00:33 compute-0 nova_compute[189381]: 2025-11-25 11:00:33.996 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:33 compute-0 nova_compute[189381]: 2025-11-25 11:00:33.997 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:33 compute-0 nova_compute[189381]: 2025-11-25 11:00:33.997 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:00:33 compute-0 nova_compute[189381]: 2025-11-25 11:00:33.997 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:00:34 compute-0 nova_compute[189381]: 2025-11-25 11:00:34.011 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 11:00:34 compute-0 nova_compute[189381]: 2025-11-25 11:00:34.012 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:34 compute-0 nova_compute[189381]: 2025-11-25 11:00:34.012 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:36.065 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:00:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:36.066 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:00:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:00:36.066 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:00:36 compute-0 nova_compute[189381]: 2025-11-25 11:00:36.560 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:36 compute-0 podman[252117]: 2025-11-25 11:00:36.949863118 +0000 UTC m=+0.068005939 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, architecture=x86_64, distribution-scope=public, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Nov 25 11:00:37 compute-0 nova_compute[189381]: 2025-11-25 11:00:37.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:37 compute-0 nova_compute[189381]: 2025-11-25 11:00:37.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:00:37 compute-0 nova_compute[189381]: 2025-11-25 11:00:37.333 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:38 compute-0 nova_compute[189381]: 2025-11-25 11:00:38.381 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:38 compute-0 nova_compute[189381]: 2025-11-25 11:00:38.852 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:39 compute-0 nova_compute[189381]: 2025-11-25 11:00:39.667 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:39 compute-0 nova_compute[189381]: 2025-11-25 11:00:39.711 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:40 compute-0 nova_compute[189381]: 2025-11-25 11:00:40.024 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:42 compute-0 nova_compute[189381]: 2025-11-25 11:00:42.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:42 compute-0 podman[252137]: 2025-11-25 11:00:42.935632336 +0000 UTC m=+0.054465667 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:00:43 compute-0 nova_compute[189381]: 2025-11-25 11:00:43.383 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:45 compute-0 podman[252156]: 2025-11-25 11:00:45.974022024 +0000 UTC m=+0.091830689 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Nov 25 11:00:45 compute-0 podman[252157]: 2025-11-25 11:00:45.976174436 +0000 UTC m=+0.090533281 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:00:46 compute-0 nova_compute[189381]: 2025-11-25 11:00:46.994 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:48 compute-0 nova_compute[189381]: 2025-11-25 11:00:48.199 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:48 compute-0 nova_compute[189381]: 2025-11-25 11:00:48.385 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:49 compute-0 podman[252202]: 2025-11-25 11:00:49.979036209 +0000 UTC m=+0.093763975 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 11:00:49 compute-0 podman[252203]: 2025-11-25 11:00:49.988370549 +0000 UTC m=+0.099155071 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:00:50 compute-0 nova_compute[189381]: 2025-11-25 11:00:50.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:00:51 compute-0 nova_compute[189381]: 2025-11-25 11:00:51.298 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:53 compute-0 nova_compute[189381]: 2025-11-25 11:00:53.387 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:54 compute-0 podman[252246]: 2025-11-25 11:00:54.971498155 +0000 UTC m=+0.081328286 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.505 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.506 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.535 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.677 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.677 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.689 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.690 189385 INFO nova.compute.claims [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.824 189385 DEBUG nova.compute.provider_tree [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.837 189385 DEBUG nova.scheduler.client.report [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.857 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.857 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.927 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.927 189385 DEBUG nova.network.neutron [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.943 189385 INFO nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:00:55 compute-0 nova_compute[189381]: 2025-11-25 11:00:55.965 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.067 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.068 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.069 189385 INFO nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Creating image(s)
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.070 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "/var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.070 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "/var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.071 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "/var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.071 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.072 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.301 189385 DEBUG nova.policy [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b821e5c3d70f4dc78d5de14f250d8590', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '81c1c4c8c73c403d8d6b430858c11434', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:00:56 compute-0 nova_compute[189381]: 2025-11-25 11:00:56.653 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:57 compute-0 nova_compute[189381]: 2025-11-25 11:00:57.025 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:57 compute-0 nova_compute[189381]: 2025-11-25 11:00:57.393 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:58 compute-0 nova_compute[189381]: 2025-11-25 11:00:58.389 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:00:58 compute-0 nova_compute[189381]: 2025-11-25 11:00:58.466 189385 DEBUG nova.network.neutron [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Successfully created port: 4b99e8ff-a6c5-4046-9654-a09c32b9646b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:00:59 compute-0 nova_compute[189381]: 2025-11-25 11:00:59.552 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:00:59 compute-0 nova_compute[189381]: 2025-11-25 11:00:59.609 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b.part --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:00:59 compute-0 nova_compute[189381]: 2025-11-25 11:00:59.610 189385 DEBUG nova.virt.images [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] b388f0fb-bd04-4296-928b-44c706e0493e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 11:00:59 compute-0 nova_compute[189381]: 2025-11-25 11:00:59.646 189385 DEBUG nova.privsep.utils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 11:00:59 compute-0 nova_compute[189381]: 2025-11-25 11:00:59.648 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b.part /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:00:59 compute-0 podman[203557]: time="2025-11-25T11:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:00:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:00:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4331 "" "Go-http-client/1.1"
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.070 189385 DEBUG nova.network.neutron [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Successfully updated port: 4b99e8ff-a6c5-4046-9654-a09c32b9646b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.104 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "refresh_cache-7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.105 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquired lock "refresh_cache-7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.106 189385 DEBUG nova.network.neutron [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.412 189385 DEBUG nova.network.neutron [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.723 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b.part /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b.converted" returned: 0 in 1.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
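The conversion just finished writing a raw copy beside the qcow2 download; the later qemu-img info calls against the unsuffixed base path imply that the .converted file is then renamed into place and the .part dropped. A sketch of that convert-and-publish step under those assumptions (function name hypothetical):

    import os
    import subprocess

    def publish_raw_base(part_path: str, base_path: str) -> None:
        # -t none bypasses the host page cache during the copy; -f and -O
        # pin the source and destination formats instead of letting qemu
        # probe them.
        converted = base_path + ".converted"
        subprocess.run(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             part_path, converted],
            check=True,
        )
        os.unlink(part_path)             # drop the qcow2 download
        os.rename(converted, base_path)  # atomic publish into _base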
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.728 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.801 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b.converted --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.802 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 4.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
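The lock released here is named after the base-image hash, and its Acquiring/Acquired/released triplets recur throughout this trace: it is the oslo_concurrency pattern that keeps two concurrent boots of the same image from both filling the cache. A minimal sketch; the lock_path is illustrative, not read from this host's configuration:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("5e1076775cb022823267aba8feacfddb7ab1429b",
                            external=True, lock_path="/var/lib/nova/locks")
    def fetch_func_sync():
        # fetch, convert, and publish the base image as sketched above; a
        # second boot of the same image blocks here instead of re-fetching
        ...

external=True makes this an inter-process file lock rather than a per-process semaphore, which is why it serializes work across workers of the service.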
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.816 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.871 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.872 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.873 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.886 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.946 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.948 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.991 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.993 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
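create_qcow2_image above gives the instance a copy-on-write overlay: reads fall through to the shared raw base in _base while writes land in the per-instance qcow2 file. The same invocation, flag for flag, outside nova:

    import subprocess

    base = "/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b"
    disk = "/var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk"

    # backing_fmt=raw must be explicit, since qemu will not probe the format
    # of a backing file; 1073741824 bytes is 1 GiB, the flavor's root disk
    # size (m1.nano, root_gb=1, as seen later in the guest XML metadata).
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         disk, "1073741824"],
        check=True,
    )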
Nov 25 11:01:00 compute-0 nova_compute[189381]: 2025-11-25 11:01:00.994 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.062 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.064 189385 DEBUG nova.virt.disk.api [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Checking if we can resize image /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.064 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.124 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.125 189385 DEBUG nova.virt.disk.api [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Cannot resize image /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
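The can_resize_image exchange above compares the requested size (1073741824, the 1 GiB root disk) with the overlay's current virtual size; a qcow2 image may only grow, so an equal or larger current size produces the "Cannot resize ... to a smaller size" line and the resize is skipped. A sketch of that comparison, reusing the qemu-img info probe from earlier:

    import json
    import subprocess

    def can_resize_image(path: str, new_size: int) -> bool:
        out = subprocess.run(
            ["qemu-img", "info", "--force-share", "--output=json", path],
            check=True, capture_output=True, text=True,
        ).stdout
        # Returning False here is what produces the debug line seen above.
        return json.loads(out)["virtual-size"] < new_size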
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.125 189385 DEBUG nova.objects.instance [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lazy-loading 'migration_context' on Instance uuid 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.138 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.138 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Ensure instance console log exists: /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.139 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.140 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.140 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.144 189385 DEBUG nova.compute.manager [req-69383651-c5a2-4034-a587-77afa263862d req-705e5051-5dab-4a70-95ef-a3d4e6ef7714 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Received event network-changed-4b99e8ff-a6c5-4046-9654-a09c32b9646b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.144 189385 DEBUG nova.compute.manager [req-69383651-c5a2-4034-a587-77afa263862d req-705e5051-5dab-4a70-95ef-a3d4e6ef7714 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Refreshing instance network info cache due to event network-changed-4b99e8ff-a6c5-4046-9654-a09c32b9646b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:01:01 compute-0 nova_compute[189381]: 2025-11-25 11:01:01.145 189385 DEBUG oslo_concurrency.lockutils [req-69383651-c5a2-4034-a587-77afa263862d req-705e5051-5dab-4a70-95ef-a3d4e6ef7714 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:01 compute-0 openstack_network_exporter[205722]: ERROR   11:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:01:01 compute-0 openstack_network_exporter[205722]: ERROR   11:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:01:01 compute-0 openstack_network_exporter[205722]: ERROR   11:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:01:01 compute-0 openstack_network_exporter[205722]: ERROR   11:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:01:01 compute-0 openstack_network_exporter[205722]: ERROR   11:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:01:01 compute-0 CROND[252303]: (root) CMD (run-parts /etc/cron.hourly)
Nov 25 11:01:01 compute-0 run-parts[252306]: (/etc/cron.hourly) starting 0anacron
Nov 25 11:01:01 compute-0 run-parts[252312]: (/etc/cron.hourly) finished 0anacron
Nov 25 11:01:01 compute-0 CROND[252302]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.111 189385 DEBUG nova.network.neutron [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Updating instance_info_cache with network_info: [{"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.130 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Releasing lock "refresh_cache-7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.131 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Instance network_info: |[{"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.131 189385 DEBUG oslo_concurrency.lockutils [req-69383651-c5a2-4034-a587-77afa263862d req-705e5051-5dab-4a70-95ef-a3d4e6ef7714 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.131 189385 DEBUG nova.network.neutron [req-69383651-c5a2-4034-a587-77afa263862d req-705e5051-5dab-4a70-95ef-a3d4e6ef7714 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Refreshing network info cache for port 4b99e8ff-a6c5-4046-9654-a09c32b9646b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.134 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Start _get_guest_xml network_info=[{"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.144 189385 WARNING nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.155 189385 DEBUG nova.virt.libvirt.host [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.155 189385 DEBUG nova.virt.libvirt.host [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.163 189385 DEBUG nova.virt.libvirt.host [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.164 189385 DEBUG nova.virt.libvirt.host [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.165 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.165 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.165 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.165 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.166 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.166 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.166 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.166 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.166 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.167 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.167 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.167 189385 DEBUG nova.virt.hardware [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.171 189385 DEBUG nova.virt.libvirt.vif [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-411749896',display_name='tempest-ServerAddressesTestJSON-server-411749896',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-411749896',id=6,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81c1c4c8c73c403d8d6b430858c11434',ramdisk_id='',reservation_id='r-739xiapo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-10314999',owner_user_name='tempest-ServerAddressesTestJSON-10314999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:00:56Z,user_data=None,user_id='b821e5c3d70f4dc78d5de14f250d8590',uuid=7a2ec38f-d9cc-45cf-8338-fe982e25d7e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.172 189385 DEBUG nova.network.os_vif_util [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Converting VIF {"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.172 189385 DEBUG nova.network.os_vif_util [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:60:8b,bridge_name='br-int',has_traffic_filtering=True,id=4b99e8ff-a6c5-4046-9654-a09c32b9646b,network=Network(c5ab8414-3551-47a1-933c-4988048192d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b99e8ff-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.173 189385 DEBUG nova.objects.instance [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.191 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <uuid>7a2ec38f-d9cc-45cf-8338-fe982e25d7e2</uuid>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <name>instance-00000006</name>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <nova:name>tempest-ServerAddressesTestJSON-server-411749896</nova:name>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:01:02</nova:creationTime>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:01:02 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:01:02 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:01:02 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:01:02 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:01:02 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:01:02 compute-0 nova_compute[189381]:         <nova:user uuid="b821e5c3d70f4dc78d5de14f250d8590">tempest-ServerAddressesTestJSON-10314999-project-member</nova:user>
Nov 25 11:01:02 compute-0 nova_compute[189381]:         <nova:project uuid="81c1c4c8c73c403d8d6b430858c11434">tempest-ServerAddressesTestJSON-10314999</nova:project>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:01:02 compute-0 nova_compute[189381]:         <nova:port uuid="4b99e8ff-a6c5-4046-9654-a09c32b9646b">
Nov 25 11:01:02 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <system>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <entry name="serial">7a2ec38f-d9cc-45cf-8338-fe982e25d7e2</entry>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <entry name="uuid">7a2ec38f-d9cc-45cf-8338-fe982e25d7e2</entry>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </system>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <os>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   </os>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <features>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   </features>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.config"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:40:60:8b"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <target dev="tap4b99e8ff-a6"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/console.log" append="off"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <video>
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </video>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:01:02 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:01:02 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:01:02 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:01:02 compute-0 nova_compute[189381]: </domain>
Nov 25 11:01:02 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
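The XML above is what the driver ultimately hands to libvirt once the VIF is plugged. Stripped of nova's event plumbing and power-state bookkeeping, defining and booting that document reduces to two calls against the python3-libvirt bindings; a minimal sketch, assuming the XML was saved to domain.xml:

    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        with open("domain.xml") as f:
            dom = conn.defineXML(f.read())  # persist the domain definition
        dom.create()                        # power on instance-00000006
    finally:
        conn.close()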
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.191 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Preparing to wait for external event network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.192 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.192 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.192 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
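prepare_for_instance_event registers a waiter for network-vif-plugged before the VIF is actually plugged, so the Neutron notification cannot race past it; later in the spawn, nova blocks on that waiter until the port goes active. The shape of the pattern, with hypothetical names and nova's default 300-second vif_plugging_timeout:

    import threading

    events: dict[str, threading.Event] = {}

    def prepare(tag: str) -> threading.Event:
        # register before triggering the action that causes the event
        return events.setdefault(tag, threading.Event())

    def deliver(tag: str) -> None:
        # called when the external-event API receives the notification
        ev = events.pop(tag, None)
        if ev is not None:
            ev.set()

    ev = prepare("network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b")
    # ... plug the VIF, define and start the domain ...
    if not ev.wait(timeout=300):
        raise TimeoutError("network-vif-plugged never arrived")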
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.193 189385 DEBUG nova.virt.libvirt.vif [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-411749896',display_name='tempest-ServerAddressesTestJSON-server-411749896',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-411749896',id=6,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='81c1c4c8c73c403d8d6b430858c11434',ramdisk_id='',reservation_id='r-739xiapo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-10314999',owner_user_name='tempest-ServerAddressesTestJSON-10314999-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:00:56Z,user_data=None,user_id='b821e5c3d70f4dc78d5de14f250d8590',uuid=7a2ec38f-d9cc-45cf-8338-fe982e25d7e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.193 189385 DEBUG nova.network.os_vif_util [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Converting VIF {"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.193 189385 DEBUG nova.network.os_vif_util [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:60:8b,bridge_name='br-int',has_traffic_filtering=True,id=4b99e8ff-a6c5-4046-9654-a09c32b9646b,network=Network(c5ab8414-3551-47a1-933c-4988048192d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b99e8ff-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.194 189385 DEBUG os_vif [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:60:8b,bridge_name='br-int',has_traffic_filtering=True,id=4b99e8ff-a6c5-4046-9654-a09c32b9646b,network=Network(c5ab8414-3551-47a1-933c-4988048192d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b99e8ff-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.194 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.194 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.195 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.198 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.199 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b99e8ff-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.199 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4b99e8ff-a6, col_values=(('external_ids', {'iface-id': '4b99e8ff-a6c5-4046-9654-a09c32b9646b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:40:60:8b', 'vm-uuid': '7a2ec38f-d9cc-45cf-8338-fe982e25d7e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.201 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:02 compute-0 NetworkManager[56317]: <info>  [1764068462.2034] manager: (tap4b99e8ff-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.205 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.210 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.211 189385 INFO os_vif [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:60:8b,bridge_name='br-int',has_traffic_filtering=True,id=4b99e8ff-a6c5-4046-9654-a09c32b9646b,network=Network(c5ab8414-3551-47a1-933c-4988048192d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b99e8ff-a6')
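The plug sequence above (AddBridgeCommand, AddPortCommand, DbSetCommand) is os-vif driving the compute node's local OVSDB through ovsdbapp. A minimal standalone sketch of the same transaction, assuming the default OVSDB unix socket path and copying the port name, MAC, and UUIDs from the log:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local OVSDB (socket path is an assumption; adjust per host).
idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# One transaction: ensure the tap port exists on br-int, then write the
# external_ids that let ovn-controller match this OVS interface to the
# Neutron port (the claim logged by ovn_controller further down).
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap4b99e8ff-a6", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap4b99e8ff-a6",
        ("external_ids", {
            "iface-id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:40:60:8b",
            "vm-uuid": "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2",
        })))

Note the "Transaction caused no change" reply to AddBridgeCommand above: br-int already existed, so that part of the commit was a no-op.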
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.466 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.467 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.467 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] No VIF found with MAC fa:16:3e:40:60:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:01:02 compute-0 nova_compute[189381]: 2025-11-25 11:01:02.467 189385 INFO nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Using config drive
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.298 189385 INFO nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Creating config drive at /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.config
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.304 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp49u0c9mc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:03.338 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:01:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:03.339 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
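The two DEBUG lines above mean the [pollsters] source has more pollsters than the single worker thread configured to run them, so each polling cycle serializes. A minimal illustration (not ceilometer code) of why that stretches the cycle's wall time:

from concurrent.futures import ThreadPoolExecutor
import time

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's work
    return name

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:  # cf. "[1] threads" above
    list(pool.map(poll, [f"pollster-{i}" for i in range(5)]))
print(f"cycle took {time.monotonic() - start:.1f}s for 5 serialized pollsters")

With one worker the cycle costs roughly the sum of the individual poll durations rather than their maximum.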
Nov 25 11:01:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:03.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:01:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:03.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
[... 25 further DEBUG "Registering pollster [<stevedore.extension.Extension object at 0x...>]" entries from ceilometer.polling.manager between 11:01:03.341 and 11:01:03.346, identical apart from the extension object addresses ...]
Nov 25 11:01:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:03.347 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 11:01:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:03.348 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
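The REQ line above is ceilometer's compute discovery fetching instance metadata through python-novaclient at microversion 2.1. A minimal sketch of the same call; the keystone endpoint and credentials here are placeholders, and only the server UUID comes from the log:

from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client

auth = v3.Password(auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed
                   username="ceilometer", password="...",  # placeholders
                   project_name="service",
                   user_domain_name="Default", project_domain_name="Default")
nova = client.Client("2.1", session=session.Session(auth=auth))

server = nova.servers.get("7a2ec38f-d9cc-45cf-8338-fe982e25d7e2")
print(server.status, server.to_dict().get("OS-EXT-STS:task_state"))

The RESP for this request only lands at 11:01:08 (see the RESP BODY near the end of this excerpt), roughly five seconds later, while the compute host is busy spawning the guest.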
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.391 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.429 189385 DEBUG oslo_concurrency.processutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp49u0c9mc" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
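The mkisofs invocation above and its 0 exit status are nova building the config drive as an ISO 9660 image. A plain-subprocess sketch of the same command; argv is copied from the log, and /tmp/tmp49u0c9mc is the temporary staging tree nova populated with the metadata files beforehand:

import subprocess

subprocess.run(
    ["/usr/bin/mkisofs",
     "-o", "/var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.config",
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     # processutils logs argv joined by spaces, so the publisher string
     # appears unquoted in the log line; it is a single argument.
     "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r", "-V", "config-2",
     "/tmp/tmp49u0c9mc"],
    check=True)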
Nov 25 11:01:03 compute-0 kernel: tap4b99e8ff-a6: entered promiscuous mode
Nov 25 11:01:03 compute-0 NetworkManager[56317]: <info>  [1764068463.4940] manager: (tap4b99e8ff-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.495 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:03 compute-0 ovn_controller[97779]: 2025-11-25T11:01:03Z|00066|binding|INFO|Claiming lport 4b99e8ff-a6c5-4046-9654-a09c32b9646b for this chassis.
Nov 25 11:01:03 compute-0 ovn_controller[97779]: 2025-11-25T11:01:03Z|00067|binding|INFO|4b99e8ff-a6c5-4046-9654-a09c32b9646b: Claiming fa:16:3e:40:60:8b 10.100.0.14
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.500 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:03 compute-0 ovn_controller[97779]: 2025-11-25T11:01:03Z|00068|binding|INFO|Setting lport 4b99e8ff-a6c5-4046-9654-a09c32b9646b ovn-installed in OVS
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.519 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.520 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.523 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:60:8b 10.100.0.14'], port_security=['fa:16:3e:40:60:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7a2ec38f-d9cc-45cf-8338-fe982e25d7e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5ab8414-3551-47a1-933c-4988048192d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81c1c4c8c73c403d8d6b430858c11434', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aadc1789-9558-4b2d-a74d-b9afb6d40937', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e75e38da-6f2b-44a6-a44c-e2f80017c82d, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=4b99e8ff-a6c5-4046-9654-a09c32b9646b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.524 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 4b99e8ff-a6c5-4046-9654-a09c32b9646b in datapath c5ab8414-3551-47a1-933c-4988048192d1 bound to our chassis
Nov 25 11:01:03 compute-0 ovn_controller[97779]: 2025-11-25T11:01:03Z|00069|binding|INFO|Setting lport 4b99e8ff-a6c5-4046-9654-a09c32b9646b up in Southbound
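The ovn_controller entries 00066 through 00069 show the chassis claiming the logical port: it matched the external_ids:iface-id written on the OVS interface at 11:01:02.199 against the Port_Binding rows in the OVN southbound database. A small sketch, assuming ovs-vsctl and ovn-sbctl are on PATH on the compute node, of how to verify that correspondence by hand:

import subprocess

IFACE = "tap4b99e8ff-a6"

# The iface-id written by os-vif on the OVS Interface row ...
iface_id = subprocess.check_output(
    ["ovs-vsctl", "get", "Interface", IFACE, "external_ids:iface-id"],
    text=True).strip().strip('"')

# ... equals the logical_port of the Port_Binding that this chassis claims.
print(subprocess.check_output(
    ["ovn-sbctl", "find", "Port_Binding", f"logical_port={iface_id}"],
    text=True))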
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.527 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5ab8414-3551-47a1-933c-4988048192d1
Nov 25 11:01:03 compute-0 systemd-machined[155706]: New machine qemu-6-instance-00000006.
Nov 25 11:01:03 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.541 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[967740ff-7991-4fa3-9b85-d43eb54cc232]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.542 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc5ab8414-31 in ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.544 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc5ab8414-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.544 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[c84c5c02-c620-4333-881b-61ce84386899]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.545 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f11270-7497-4a67-8468-4842bf6a0d69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 systemd-udevd[252336]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.558 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[de718054-8625-4b45-ac66-7c06ba9fcffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 NetworkManager[56317]: <info>  [1764068463.5759] device (tap4b99e8ff-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:01:03 compute-0 NetworkManager[56317]: <info>  [1764068463.5801] device (tap4b99e8ff-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.585 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[d3106403-7f8b-49a5-961f-560c6b8d4444]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.617 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[79274435-f106-4442-83cb-3da4113165ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 systemd-udevd[252339]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.624 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[52b6dc03-b452-4b59-a82d-7cc82d86f197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 NetworkManager[56317]: <info>  [1764068463.6276] manager: (tapc5ab8414-30): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.658 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[8a223735-c42e-4fa3-ab71-388a981ad5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.661 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc2732e-3ba0-4888-af1a-260127676480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 NetworkManager[56317]: <info>  [1764068463.6868] device (tapc5ab8414-30): carrier: link connected
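The privsep replies above correspond to the metadata agent building the veth pair announced at 11:01:03.542: tapc5ab8414-31 inside the ovnmeta-c5ab8414-... namespace, tapc5ab8414-30 left on the host to be plugged into br-int just below. A rough equivalent with pyroute2 (which neutron's privileged ip_lib uses underneath); a sketch only, omitting the idempotence and error handling the agent does:

from pyroute2 import IPRoute, NetNS, netns

NS = "ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1"
netns.create(NS)

ipr = IPRoute()
# Create the veth pair; -30 stays on the host, -31 goes into the namespace.
ipr.link("add", ifname="tapc5ab8414-30", kind="veth", peer="tapc5ab8414-31")
idx = ipr.link_lookup(ifname="tapc5ab8414-31")[0]
ipr.link("set", index=idx, net_ns_fd=NS)
ipr.close()

# Bring the inner end up from inside the namespace.
with NetNS(NS) as ns:
    ns.link("set", index=ns.link_lookup(ifname="tapc5ab8414-31")[0],
            state="up")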
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.694 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[e18be623-622e-4e80-a716-be4ce7ac8517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.715 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[24371cd9-1b97-43a7-90fb-180dbd1b7a61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5ab8414-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:82:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537126, 'reachable_time': 39938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252367, 'error': None, 'target': 'ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.732 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[66c4c120-49c0-44df-ad00-9040aedddade]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:82a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537126, 'tstamp': 537126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252368, 'error': None, 'target': 'ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.749 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[d9da589b-904c-4202-b7b2-2eab5dec508e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5ab8414-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:82:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537126, 'reachable_time': 39938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252369, 'error': None, 'target': 'ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.784 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcd6e7e-e430-4adb-9ea2-6f1fb0212c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.857 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[589b779f-aae5-4b95-adb5-8282ff006454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.859 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5ab8414-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.860 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.863 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5ab8414-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.865 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:03 compute-0 NetworkManager[56317]: <info>  [1764068463.8668] manager: (tapc5ab8414-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 25 11:01:03 compute-0 kernel: tapc5ab8414-30: entered promiscuous mode
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.871 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5ab8414-30, col_values=(('external_ids', {'iface-id': 'aead71f5-23d3-478c-9967-5aead033d6fc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:03 compute-0 ovn_controller[97779]: 2025-11-25T11:01:03Z|00070|binding|INFO|Releasing lport aead71f5-23d3-478c-9967-5aead033d6fc from this chassis (sb_readonly=0)
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.873 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.877 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c5ab8414-3551-47a1-933c-4988048192d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c5ab8414-3551-47a1-933c-4988048192d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.878 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[076d3a60-98e4-43fa-ac9e-6e9a6a19a734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.879 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-c5ab8414-3551-47a1-933c-4988048192d1
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/c5ab8414-3551-47a1-933c-4988048192d1.pid.haproxy
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID c5ab8414-3551-47a1-933c-4988048192d1
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:01:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:03.880 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1', 'env', 'PROCESS_TAG=haproxy-c5ab8414-3551-47a1-933c-4988048192d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c5ab8414-3551-47a1-933c-4988048192d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 11:01:03 compute-0 nova_compute[189381]: 2025-11-25 11:01:03.889 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:04 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 11:01:04 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 11:01:04 compute-0 nova_compute[189381]: 2025-11-25 11:01:04.238 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068464.2382045, 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:04 compute-0 nova_compute[189381]: 2025-11-25 11:01:04.239 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] VM Started (Lifecycle Event)
Nov 25 11:01:04 compute-0 nova_compute[189381]: 2025-11-25 11:01:04.258 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:04 compute-0 podman[252386]: 2025-11-25 11:01:04.261590726 +0000 UTC m=+0.087782312 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 25 11:01:04 compute-0 nova_compute[189381]: 2025-11-25 11:01:04.265 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068464.2383487, 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:04 compute-0 nova_compute[189381]: 2025-11-25 11:01:04.265 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] VM Paused (Lifecycle Event)
Nov 25 11:01:04 compute-0 nova_compute[189381]: 2025-11-25 11:01:04.287 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:04 compute-0 podman[252390]: 2025-11-25 11:01:04.289982828 +0000 UTC m=+0.116109792 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:01:04 compute-0 nova_compute[189381]: 2025-11-25 11:01:04.292 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:01:04 compute-0 nova_compute[189381]: 2025-11-25 11:01:04.309 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] During sync_power_state the instance has a pending task (spawning). Skip.
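The "Skip" above is nova's power-state synchronization declining to act: the DB still says power_state 0 (NOSTATE) while libvirt reports 3 (PAUSED), but the instance has a pending task_state of spawning, so the discrepancy is expected. A paraphrase of that decision (not nova's actual code; the constants match nova.compute.power_state):

NOSTATE, RUNNING, PAUSED = 0, 1, 3  # from nova.compute.power_state

def should_sync(db_power_state, vm_power_state, task_state):
    if task_state is not None:      # pending task, e.g. 'spawning' -> defer
        return False
    return db_power_state != vm_power_state

print(should_sync(NOSTATE, PAUSED, "spawning"))  # False -> "Skip."

The back-to-back "VM Started"/"VM Paused" lifecycle events above fit the same picture: libvirt typically creates the domain paused and nova resumes it once the spawn completes.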
Nov 25 11:01:04 compute-0 podman[252456]: 2025-11-25 11:01:04.331438358 +0000 UTC m=+0.070229274 container create 0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 11:01:04 compute-0 podman[252456]: 2025-11-25 11:01:04.292742818 +0000 UTC m=+0.031533754 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:01:04 compute-0 systemd[1]: Started libpod-conmon-0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58.scope.
Nov 25 11:01:04 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:01:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfb9421dcc7e8521861421a2788282bf13f21f93af822c34e1b3582ebb335d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:04 compute-0 podman[252456]: 2025-11-25 11:01:04.520378206 +0000 UTC m=+0.259169142 container init 0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:01:04 compute-0 podman[252456]: 2025-11-25 11:01:04.53225346 +0000 UTC m=+0.271044376 container start 0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 11:01:04 compute-0 neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1[252474]: [NOTICE]   (252478) : New worker (252480) forked
Nov 25 11:01:04 compute-0 neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1[252474]: [NOTICE]   (252478) : Loading success.
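With haproxy loaded, the metadata path for this network is complete: a guest request to 169.254.169.254:80 inside the ovnmeta-c5ab8414-... namespace is proxied over the unix socket /var/lib/neutron/metadata_proxy with the X-OVN-Network-ID header added, per the haproxy_cfg above. From inside the guest this is simply (a sketch, assuming the usual metadata routing is in place):

import requests

r = requests.get("http://169.254.169.254/openstack/latest/meta_data.json",
                 timeout=5)
print(r.status_code, r.json().get("uuid"))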
Nov 25 11:01:06 compute-0 nova_compute[189381]: 2025-11-25 11:01:06.242 189385 DEBUG nova.network.neutron [req-69383651-c5a2-4034-a587-77afa263862d req-705e5051-5dab-4a70-95ef-a3d4e6ef7714 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Updated VIF entry in instance network info cache for port 4b99e8ff-a6c5-4046-9654-a09c32b9646b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:01:06 compute-0 nova_compute[189381]: 2025-11-25 11:01:06.243 189385 DEBUG nova.network.neutron [req-69383651-c5a2-4034-a587-77afa263862d req-705e5051-5dab-4a70-95ef-a3d4e6ef7714 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Updating instance_info_cache with network_info: [{"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:06 compute-0 nova_compute[189381]: 2025-11-25 11:01:06.266 189385 DEBUG oslo_concurrency.lockutils [req-69383651-c5a2-4034-a587-77afa263862d req-705e5051-5dab-4a70-95ef-a3d4e6ef7714 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:07 compute-0 nova_compute[189381]: 2025-11-25 11:01:07.202 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:07 compute-0 podman[252489]: 2025-11-25 11:01:07.964583378 +0000 UTC m=+0.073318212 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, name=ubi9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.234 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1623 Content-Type: application/json Date: Tue, 25 Nov 2025 11:01:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-155c87b7-4b09-4846-a606-63bc39e7dc9e x-openstack-request-id: req-155c87b7-4b09-4846-a606-63bc39e7dc9e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.234 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2", "name": "tempest-ServerAddressesTestJSON-server-411749896", "status": "BUILD", "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "user_id": "b821e5c3d70f4dc78d5de14f250d8590", "metadata": {}, "hostId": "3adb950c22157e48020e9628e55e6134ea00a2acadd16bda6070781c", "image": {"id": "b388f0fb-bd04-4296-928b-44c706e0493e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b388f0fb-bd04-4296-928b-44c706e0493e"}]}, "flavor": {"id": "b7c0626e-febc-4083-b621-6f5ee0740a18", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b7c0626e-febc-4083-b621-6f5ee0740a18"}]}, "created": "2025-11-25T11:00:51Z", "updated": "2025-11-25T11:00:56Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.234 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 used request id req-155c87b7-4b09-4846-a606-63bc39e7dc9e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.236 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7a2ec38f-d9cc-45cf-8338-fe982e25d7e2', 'name': 'tempest-ServerAddressesTestJSON-server-411749896', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': '81c1c4c8c73c403d8d6b430858c11434', 'user_id': 'b821e5c3d70f4dc78d5de14f250d8590', 'hostId': '3adb950c22157e48020e9628e55e6134ea00a2acadd16bda6070781c', 'status': 'paused', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
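The discovery step above is an authenticated GET against the nova API (request id req-155c87b7-4b09-4846-a606-63bc39e7dc9e). A hedged sketch of reproducing that call with keystoneauth1 and novaclient; the auth URL, credentials and domain names are assumptions, while the microversion (2.1) and server UUID come from the log:

    # Sketch under stated assumptions: fetch the same server record
    # that ceilometer's compute discovery logged above.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed
        username="admin", password="secret",                         # assumed
        project_name="admin",                                        # assumed
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = client.Client("2.1", session=session.Session(auth=auth))
    server = nova.servers.get("7a2ec38f-d9cc-45cf-8338-fe982e25d7e2")
    print(server.status, getattr(server, "OS-EXT-STS:vm_state"))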
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.237 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.237 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.237 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:01:08.237349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.241 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 / tap4b99e8ff-a6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.242 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.243 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
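The network.outgoing.bytes.delta poll that follows reports 0 for the reason flagged by the "No delta meter predecessor" debug at 11:01:08.241: a delta needs a previously cached reading for the (instance, VIF) pair, and this is the first poll since tap4b99e8ff-a6 was bound, so there is no baseline yet. Roughly the behaviour, as a hypothetical sketch rather than ceilometer's actual cache code:

    # Hypothetical sketch of a delta-with-predecessor cache: the first
    # sample for a key has no baseline, so the delta reports 0.
    _prev = {}

    def delta(instance_id, vif, value):
        key = (instance_id, vif)
        baseline = _prev.get(key)
        _prev[key] = value
        return 0 if baseline is None else max(value - baseline, 0)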
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.243 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.243 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.243 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.243 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.244 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.244 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.244 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.244 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.244 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.244 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:01:08.243587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.245 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:01:08.244684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.267 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.267 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2: ceilometer.compute.pollsters.NoVolumeException
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
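memory.usage comes from libvirt's per-domain memory statistics, which depend on the balloon driver inside a running guest; the server record above still shows task_state "spawning" and the discovery data reports the instance paused, so the inspector has nothing to report and the pollster raises NoVolumeException instead of emitting a sample. A hedged sketch of reading the same counters directly, assuming local access to the system libvirt URI:

    # Sketch, assuming libvirtd is reachable on qemu:///system; the
    # domain name is OS-EXT-SRV-ATTR:instance_name from the log above.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000006")
    stats = dom.memoryStats()
    # 'rss' is host-side and usually present; 'available'/'unused'
    # need the guest balloon driver, which is why a paused,
    # still-building instance reports "Unavailable".
    print(stats.get("rss"), stats.get("available"))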
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.267 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.268 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.268 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.268 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.268 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerAddressesTestJSON-server-411749896>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerAddressesTestJSON-server-411749896>]
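The rate meter fails differently from the delta meter: the libvirt inspector exposes only cumulative counters, so OutgoingBytesRatePollster can never succeed, and the manager blacklists it for this source permanently (PollsterPermanentError) instead of retrying every cycle. A consumer that wants a rate derives it from two cumulative samples; a minimal sketch:

    # Minimal sketch: derive bytes/sec from two cumulative
    # network.outgoing.bytes samples, since (per the log above) the
    # libvirt inspector provides no rates itself.
    def rate(prev_value, prev_ts, cur_value, cur_ts):
        dt = (cur_ts - prev_ts).total_seconds()
        if dt <= 0:
            return None  # duplicate sample or clock skew
        return max(cur_value - prev_value, 0) / dt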
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.268 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.269 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.269 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-25T11:01:08.268239) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:01:08.269072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.270 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.271 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:01:08.270275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.271 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.271 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.271 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.272 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.272 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:01:08.271478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.273 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.273 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/cpu volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.273 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:01:08.273063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.274 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.274 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.275 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.275 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:01:08.274365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:01:08.275445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.289 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.290 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.290 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.290 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.290 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:01:08.290924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.327 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.328 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.328 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.329 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:01:08.328858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.331 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:01:08.331166) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.331 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.331 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.332 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:01:08.332701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.332 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.333 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.333 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.333 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.334 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.334 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.334 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:01:08.333993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.335 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.335 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.336 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.336 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/power.state volume: 3 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
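The power.state sample of 3 is nova's power-state code for PAUSED, consistent with the vm_state "paused" that discovery reported at 11:01:08.236. The mapping, as defined in nova.compute.power_state:

    # Nova power-state codes; the sampled value 3 above means PAUSED.
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    assert POWER_STATES[3] == "PAUSED"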
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.337 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.337 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.337 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.338 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.338 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:01:08.335401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:01:08.336747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:01:08.337732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:01:08.339009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.340 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.340 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.340 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:01:08.340059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.341 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.341 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.341 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.341 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerAddressesTestJSON-server-411749896>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerAddressesTestJSON-server-411749896>]
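The ERROR above is ceilometer's permanent-failure path: the libvirt inspector has no data for IncomingBytesRatePollster, so the pollster raises PollsterPermanentError and the manager blacklists those resources instead of retrying them every cycle. A self-contained sketch of that contract (the class and function here are stand-ins, not the ceilometer source):

    class PollsterPermanentError(Exception):
        # Stand-in for ceilometer.polling.plugin_base.PollsterPermanentError;
        # carries the resources that can never be polled by this pollster.
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def poll_once(name, resources, get_samples):
        try:
            return list(get_samples(resources))
        except PollsterPermanentError as err:
            # Mirrors the ERROR line: drop these resources from the source
            # so the manager never retries them.
            print("Prevent pollster %s from polling %s on source pollsters "
                  "anymore!" % (name, err.fail_res_list))
            return []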
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-25T11:01:08.341472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:01:08.342528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.343 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.344 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:01:08.344006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.345 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.345 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.346 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.346 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:01:08.345252) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:01:08.346214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.347 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.348 14 DEBUG ceilometer.compute.pollsters [-] 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:01:08.347912) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:01:08 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:01:08.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
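The burst of "Finished processing pollster" lines closes one polling interval: every meter in the task went through discovery, a heartbeat update, and sample collection. Loosely, per task (hypothetical names, not the manager's internals):

    import datetime

    def run_polling_task(pollster_names, discover, get_samples):
        for name in pollster_names:
            resources = discover("local_instances")  # "Executing discovery process ..."
            heartbeat = datetime.datetime.now(datetime.timezone.utc)  # "Updated heartbeat for ..."
            samples = get_samples(name, resources)   # "<uuid>/<meter> volume: N"
            print("Finished processing pollster [%s]." % name)
            yield name, heartbeat, samples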
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.393 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.649 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.649 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
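The Acquiring/acquired pair above is oslo.concurrency's standard lock logging; nova serializes builds of the same instance by locking on the instance UUID, so a second build request for c4d7af36-... would wait here. A minimal sketch with oslo.concurrency installed (the function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("c4d7af36-620f-46df-8347-4eaeed7856c6")
    def _locked_do_build_and_run_instance():
        # nova runs the full build sequence while holding this lock
        print("holding the per-instance lock")

    _locked_do_build_and_run_instance()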
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.683 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.779 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.780 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.802 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.802 189385 INFO nova.compute.claims [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:01:08 compute-0 nova_compute[189381]: 2025-11-25 11:01:08.989 189385 DEBUG nova.compute.provider_tree [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.014 189385 DEBUG nova.scheduler.client.report [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
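The inventory dict above fixes the schedulable capacity of this node: placement treats usable capacity as (total - reserved) * allocation_ratio per resource class. Worked out for the values logged:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2

So this 8-vCPU host can place up to 32 vCPUs of instances at the 4.0 overcommit ratio, which is why the claim succeeds immediately.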
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.036 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.037 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.109 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.110 189385 DEBUG nova.network.neutron [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.129 189385 INFO nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.147 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.261 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.262 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.264 189385 INFO nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Creating image(s)
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.265 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.265 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.266 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.280 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.342 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
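The prlimit wrapper in the CMD above is what oslo.concurrency emits when a process limit is requested: qemu-img info runs under a 1 GiB address-space cap and 30 s of CPU time, so a malformed image cannot wedge the compute agent. An equivalent call (the path is taken from the log):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b",
        "--force-share", "--output=json",
        env_variables={"LC_ALL": "C", "LANG": "C"},
        # re-execs via /usr/bin/python3 -m oslo_concurrency.prlimit,
        # producing exactly the --as/--cpu flags logged above
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30),
    )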
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.343 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.343 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.355 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.417 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.419 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.466 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
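The qemu-img create above builds the instance root disk as a copy-on-write qcow2 overlay: the shared raw base image in _base stays read-only, and the 1073741824-byte (1 GiB) overlay receives only this instance's writes. The same invocation expressed through oslo.concurrency:

    from oslo_concurrency import processutils

    base = "/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b"
    disk = "/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk"
    processutils.execute(
        "qemu-img", "create", "-f", "qcow2",
        "-o", "backing_file=%s,backing_fmt=raw" % base,
        disk, "1073741824",
        env_variables={"LC_ALL": "C", "LANG": "C"},
    )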
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.467 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.468 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.526 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.527 189385 DEBUG nova.virt.disk.api [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Checking if we can resize image /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.530 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.599 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.600 189385 DEBUG nova.virt.disk.api [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Cannot resize image /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
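"Cannot resize image ... to a smaller size" is the expected outcome here, not a failure: the requested root size equals the overlay's virtual size (1073741824 bytes), and nova only ever grows disks. A hypothetical stand-in for the check in nova.virt.disk.api:

    import json
    import subprocess

    def can_resize_image(path, new_size):
        # Read the current virtual size and refuse anything that is not
        # strictly larger; shrinking a guest filesystem would corrupt it.
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", path]))
        if new_size <= info["virtual-size"]:
            print("Cannot resize image %s to a smaller size." % path)
            return False
        return True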
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.600 189385 DEBUG nova.objects.instance [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lazy-loading 'migration_context' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.615 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.616 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Ensure instance console log exists: /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.616 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.617 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.617 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:09 compute-0 nova_compute[189381]: 2025-11-25 11:01:09.684 189385 DEBUG nova.policy [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '28101b622acc41c3aa3608e548b7ef96', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '826c484414ce4e89a03cf37f2359f956', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
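The failed check above is oslo.policy at work: attaching an instance to an external network is admin-only by default, and these credentials carry only the reader and member roles, so nova skips external networks for this request rather than erroring out. A minimal reproduction (the rule string is an assumption for illustration):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "is_admin:True"))
    creds = {"roles": ["reader", "member"], "is_admin": False,
             "project_id": "826c484414ce4e89a03cf37f2359f956"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # False -> nova logs "Policy check ... failed" with these credentials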
Nov 25 11:01:10 compute-0 nova_compute[189381]: 2025-11-25 11:01:10.883 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "388d7cfb-c9e5-413a-9649-93e137294b38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:10 compute-0 nova_compute[189381]: 2025-11-25 11:01:10.884 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:10 compute-0 nova_compute[189381]: 2025-11-25 11:01:10.903 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.035 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.036 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.043 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.043 189385 INFO nova.compute.claims [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.612 189385 DEBUG nova.compute.provider_tree [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.626 189385 DEBUG nova.scheduler.client.report [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.650 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.651 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.710 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.711 189385 DEBUG nova.network.neutron [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.731 189385 INFO nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.749 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.896 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.898 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.898 189385 INFO nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Creating image(s)
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.899 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "/var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.900 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "/var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.901 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "/var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.920 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.980 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.981 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.982 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:11 compute-0 nova_compute[189381]: 2025-11-25 11:01:11.992 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.060 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.062 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.101 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.102 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
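
The nine lines above are Nova's copy-on-write image path in miniature: qemu-img info (wrapped in oslo_concurrency.prlimit to cap the helper at 1 GiB of address space and 30 s of CPU) inspects the cached base image, a file lock named after the base image's hash serializes concurrent builds against it, and qemu-img create lays down a qcow2 overlay whose backing file is the raw base. The 1073741824 argument is the overlay's virtual size in bytes (1 GiB, matching the m1.nano root disk seen later in this log); no image data is copied at create time. A minimal sketch of the same sequence with oslo.concurrency, using the paths and limits from the log; the function shape is illustrative, not Nova source:

    # Illustrative sketch, not Nova's code: replay the logged commands with
    # oslo.concurrency. Paths, limits and sizes are taken from the log lines.
    from oslo_concurrency import lockutils, processutils

    BASE = '/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b'
    DISK = '/var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk'
    # Mirrors "prlimit --as=1073741824 --cpu=30" in the logged command lines.
    LIMITS = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    def qemu_img_info(path):
        # --force-share lets info run even while another process holds the image.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', path,
            '--force-share', '--output=json', prlimit=LIMITS)
        return out

    # The lock name is the base image hash, matching the Acquiring/released lines.
    with lockutils.lock('5e1076775cb022823267aba8feacfddb7ab1429b'):
        qemu_img_info(BASE)                    # verify the cached base image
        processutils.execute(                  # then create the 1 GiB overlay
            'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'create', '-f', 'qcow2',
            '-o', 'backing_file=%s,backing_fmt=raw' % BASE, DISK, '1073741824')
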
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.103 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.167 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.169 189385 DEBUG nova.virt.disk.api [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Checking if we can resize image /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.170 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.205 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.230 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.232 189385 DEBUG nova.virt.disk.api [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Cannot resize image /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
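
Despite the wording, "Cannot resize image ... to a smaller size" is not an error here: the flavor's root disk (1 GiB) equals the overlay's virtual size, and Nova only ever grows a disk image, never shrinks one, so the resize step is simply skipped. A rough sketch of the guard behind can_resize_image (the real check lives in nova/virt/disk/api.py), reusing qemu_img_info from the sketch above:

    import json

    def can_resize_image(path, requested_bytes):
        # Refuse anything that is not a strict grow: compare the overlay's
        # current virtual size (from qemu-img info JSON) to the request.
        virtual_size = json.loads(qemu_img_info(path))['virtual-size']
        return requested_bytes > virtual_size
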
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.232 189385 DEBUG nova.objects.instance [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lazy-loading 'migration_context' on Instance uuid 388d7cfb-c9e5-413a-9649-93e137294b38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.243 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.244 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Ensure instance console log exists: /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.245 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.245 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.245 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
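
The vgpu_resources triple is the signature of oslo.concurrency's synchronized decorator: "Acquiring" (lockutils.py:404), "acquired :: waited" (:409) and "released :: held" (:423) all come from its inner wrapper, where waited is the queueing delay behind other holders and held is the time spent in the critical section. A 0.000s/0.000s pair like this one means the lock was uncontended and _allocate_mdevs returned immediately (no mediated devices requested by the flavor). The decorator form, sketched:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def _allocate_mdevs():
        # Body elided; only the lock bracketing that produces the three
        # log lines above is relevant here.
        pass
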
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.282 189385 DEBUG nova.policy [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2c4b9fe3a6ed4ac6a15a5f331dbe9842', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aab9dbacd4e342dc8dba92c598ab985b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
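
This policy DEBUG is likewise informational rather than a failure of the boot: before allocating ports, Nova asks oslo.policy whether the requesting user may attach to external networks, and with is_admin False and only the reader and member roles the rule denies it, so external networks are merely excluded from consideration for this instance. As a hedged sketch (the exact default check string varies by release, but the rule is admin-only in stock policy):

    # Assumption: stock admin-only default for the rule named in the log.
    POLICY_DEFAULTS = {
        'network:attach_external_network': 'is_admin:True',
    }
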
Nov 25 11:01:12 compute-0 nova_compute[189381]: 2025-11-25 11:01:12.434 189385 DEBUG nova.network.neutron [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Successfully created port: 5a6cf231-3edc-4338-bb8e-74f0f7e6672d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:01:13 compute-0 nova_compute[189381]: 2025-11-25 11:01:13.395 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:13 compute-0 podman[252538]: 2025-11-25 11:01:13.93696139 +0000 UTC m=+0.053279573 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.155 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.155 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.245 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.380 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.381 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.390 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.390 189385 INFO nova.compute.claims [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Claim successful on node compute-0.ctlplane.example.com
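
The "Require both a host and instance NUMA topology" line just above means the NUMA fit check was skipped, not that it failed: this flavor requests no guest NUMA topology, so there is nothing to pin, and the claim succeeds against the host as a whole. For contrast, a flavor that does engage that logic would carry an extra spec along these lines (illustrative values, not taken from this log):

    # Hypothetical flavor extra specs that would give the instance a NUMA
    # topology and so activate numa_fit_instance_to_host:
    extra_specs = {'hw:numa_nodes': '1'}
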
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.406 189385 DEBUG nova.compute.manager [req-682a84d1-f5b9-49ad-8a8f-0f7bad55e983 req-22fec304-7e9a-46d5-bdaa-554e0f5d88a5 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Received event network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.406 189385 DEBUG oslo_concurrency.lockutils [req-682a84d1-f5b9-49ad-8a8f-0f7bad55e983 req-22fec304-7e9a-46d5-bdaa-554e0f5d88a5 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.407 189385 DEBUG oslo_concurrency.lockutils [req-682a84d1-f5b9-49ad-8a8f-0f7bad55e983 req-22fec304-7e9a-46d5-bdaa-554e0f5d88a5 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.407 189385 DEBUG oslo_concurrency.lockutils [req-682a84d1-f5b9-49ad-8a8f-0f7bad55e983 req-22fec304-7e9a-46d5-bdaa-554e0f5d88a5 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.408 189385 DEBUG nova.compute.manager [req-682a84d1-f5b9-49ad-8a8f-0f7bad55e983 req-22fec304-7e9a-46d5-bdaa-554e0f5d88a5 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Processing event network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.409 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Instance event wait completed in 10 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.414 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068474.4140267, 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.415 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] VM Resumed (Lifecycle Event)
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.417 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.423 189385 INFO nova.virt.libvirt.driver [-] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Instance spawned successfully.
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.423 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.453 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.462 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.467 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.468 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.468 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.469 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.469 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.470 189385 DEBUG nova.virt.libvirt.driver [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
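
The six "Found default" lines show Nova back-filling image properties it had to choose for this guest, so that later rebuilds, migrations or resizes reproduce the same virtual hardware even if hypervisor defaults change. Collected into one place (values taken directly from the lines above):

    # Defaults Nova registered for instance 7a2ec38f-..., per the log:
    registered_defaults = {
        'hw_cdrom_bus':     'sata',
        'hw_disk_bus':      'virtio',
        'hw_input_bus':     'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model':   'virtio',
        'hw_vif_model':     'virtio',
    }
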
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.499 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] During sync_power_state the instance has a pending task (spawning). Skip.
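
The power-state thread that ends with "Skip." decodes via nova.compute.power_state: the database still holds 0 (NOSTATE, never yet synced) while libvirt reports 1 (RUNNING), which is exactly what a "Resumed" lifecycle event during an in-flight spawn should look like, and because task_state is still spawning the sync defers to the build rather than rewriting the record. The relevant constants:

    # Values from nova.compute.power_state (stable across many releases):
    NOSTATE   = 0x00  # DB value before the first successful sync
    RUNNING   = 0x01  # what libvirt reports once the guest is up
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07
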
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.544 189385 DEBUG nova.network.neutron [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Successfully created port: c0d318cc-f546-4bbc-aebc-f0c185dff8aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.576 189385 INFO nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Took 18.51 seconds to spawn the instance on the hypervisor.
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.576 189385 DEBUG nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.655 189385 INFO nova.compute.manager [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Took 19.00 seconds to build instance.
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.672 189385 DEBUG nova.compute.provider_tree [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.720 189385 DEBUG nova.scheduler.client.report [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
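
The inventory dict is worth unpacking: Placement treats capacity per resource class as (total - reserved) * allocation_ratio, so this 8-vCPU, 7.5 GiB host advertises heavily oversubscribed CPU, uncommitted memory, and slightly held-back disk. The arithmetic, using the figures in the line above:

    # Schedulable capacity Placement derives from the logged inventory:
    vcpu_capacity = (8 - 0) * 4.0        # 32 VCPUs (4x oversubscription)
    ram_capacity  = (7679 - 512) * 1.0   # 7167 MB
    disk_capacity = (79 - 1) * 0.9       # 70.2 GB (10% headroom)
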
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.754 189385 DEBUG oslo_concurrency.lockutils [None req-e8069159-b391-4a10-b5b1-520709355500 b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.249s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.759 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.760 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.792 189385 DEBUG nova.network.neutron [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Successfully updated port: 5a6cf231-3edc-4338-bb8e-74f0f7e6672d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.821 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.821 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquired lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.821 189385 DEBUG nova.network.neutron [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.837 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.838 189385 DEBUG nova.network.neutron [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.880 189385 INFO nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:01:14 compute-0 nova_compute[189381]: 2025-11-25 11:01:14.911 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.096 189385 DEBUG nova.network.neutron [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.121 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.123 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.123 189385 INFO nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Creating image(s)
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.124 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "/var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.124 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "/var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.125 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "/var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.138 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.178 189385 DEBUG nova.policy [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dcfeee3b6d344d059499b78710287a87', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '04532f8fff61471495a338caf8c9670e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.201 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.202 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.203 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.214 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.276 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.277 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.322 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.323 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.323 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.379 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.380 189385 DEBUG nova.virt.disk.api [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Checking if we can resize image /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.380 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.437 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.438 189385 DEBUG nova.virt.disk.api [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Cannot resize image /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.438 189385 DEBUG nova.objects.instance [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lazy-loading 'migration_context' on Instance uuid 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.456 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.456 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Ensure instance console log exists: /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.457 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.457 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:15 compute-0 nova_compute[189381]: 2025-11-25 11:01:15.458 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:16 compute-0 podman[252573]: 2025-11-25 11:01:16.947140958 +0000 UTC m=+0.061813261 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:01:16 compute-0 podman[252572]: 2025-11-25 11:01:16.962928935 +0000 UTC m=+0.079401520 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git)
Nov 25 11:01:17 compute-0 nova_compute[189381]: 2025-11-25 11:01:17.208 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:18 compute-0 nova_compute[189381]: 2025-11-25 11:01:18.397 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.140 189385 DEBUG nova.network.neutron [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updating instance_info_cache with network_info: [{"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.519 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Releasing lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.521 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Instance network_info: |[{"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.524 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Start _get_guest_xml network_info=[{"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
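
The three wall-of-JSON lines above all carry the same network_info record, and buried in it is a compact description of the port OVN bound for this guest: one fixed IPv4 address on a /28, br-int as the integration bridge, and an MTU of 1442, consistent with Geneve encapsulation overhead on a 1500-byte fabric. These particular blobs are valid JSON (lowercase true/false/null), so a small extractor is enough to summarize one; the field paths below follow the logged structure:

    import json

    def summarize_vif(blob):
        # blob: the network_info list as logged, i.e. the full [ { ... } ] text.
        vif = json.loads(blob)[0]
        subnet = vif['network']['subnets'][0]
        return {
            'port_id': vif['id'],                            # 5a6cf231-...
            'mac':     vif['address'],                       # fa:16:3e:82:ff:2a
            'ip':      subnet['ips'][0]['address'],          # 10.100.0.6
            'cidr':    subnet['cidr'],                       # 10.100.0.0/28
            'bridge':  vif['details']['bridge_name'],        # br-int
            'driver':  vif['details']['bound_drivers']['0'], # ovn
            'mtu':     vif['network']['meta']['mtu'],        # 1442
            'devname': vif['devname'],                       # tap5a6cf231-3e
        }
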
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.531 189385 WARNING nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.537 189385 DEBUG nova.virt.libvirt.host [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.538 189385 DEBUG nova.virt.libvirt.host [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.543 189385 DEBUG nova.virt.libvirt.host [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.544 189385 DEBUG nova.virt.libvirt.host [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
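
The two-step probe above (a v1 miss followed by a v2 hit) is what a unified-cgroups host looks like to the libvirt driver: the legacy cpu controller is absent, so it falls back to cgroup v2 before concluding that CPU limits can be enforced. On such a host the v2 side of the check reduces to roughly one file read (a sketch; the real helper is nova.virt.libvirt.host._has_cgroupsv2_cpu_controller):

    def has_cgroupsv2_cpu_controller():
        # The unified hierarchy lists available controllers in one file;
        # "cpu" being present is what the log's ":1679" line reports.
        with open('/sys/fs/cgroup/cgroup.controllers') as f:
            return 'cpu' in f.read().split()
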
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.545 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.545 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.546 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.546 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.546 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.547 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.547 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.548 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.548 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.548 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.549 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.549 189385 DEBUG nova.virt.hardware [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.553 189385 DEBUG nova.virt.libvirt.vif [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-529149042',display_name='tempest-ServerActionsTestJSON-server-529149042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-529149042',id=7,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzWJb9N1xKHRqheyAvQfzLJN/1EXZRkwEZB48VX8Av1lPssKsugB7RXaWiGMq0S+O13B7XTAT58mD2UKEKFp3RMSIDEcXXZEClMlcSxvJw62JrrIVelFsyCSZ1uD8LCvQ==',key_name='tempest-keypair-689374724',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='826c484414ce4e89a03cf37f2359f956',ramdisk_id='',reservation_id='r-g88p5309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-62183409',owner_user_name='tempest-ServerActionsTestJSON-62183409-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:01:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28101b622acc41c3aa3608e548b7ef96',uuid=c4d7af36-620f-46df-8347-4eaeed7856c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.554 189385 DEBUG nova.network.os_vif_util [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converting VIF {"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.554 189385 DEBUG nova.network.os_vif_util [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.555 189385 DEBUG nova.objects.instance [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lazy-loading 'pci_devices' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.610 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <uuid>c4d7af36-620f-46df-8347-4eaeed7856c6</uuid>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <name>instance-00000007</name>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <nova:name>tempest-ServerActionsTestJSON-server-529149042</nova:name>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:01:19</nova:creationTime>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:01:19 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:01:19 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:01:19 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:01:19 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:01:19 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:01:19 compute-0 nova_compute[189381]:         <nova:user uuid="28101b622acc41c3aa3608e548b7ef96">tempest-ServerActionsTestJSON-62183409-project-member</nova:user>
Nov 25 11:01:19 compute-0 nova_compute[189381]:         <nova:project uuid="826c484414ce4e89a03cf37f2359f956">tempest-ServerActionsTestJSON-62183409</nova:project>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:01:19 compute-0 nova_compute[189381]:         <nova:port uuid="5a6cf231-3edc-4338-bb8e-74f0f7e6672d">
Nov 25 11:01:19 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <system>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <entry name="serial">c4d7af36-620f-46df-8347-4eaeed7856c6</entry>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <entry name="uuid">c4d7af36-620f-46df-8347-4eaeed7856c6</entry>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </system>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <os>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   </os>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <features>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   </features>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.config"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:82:ff:2a"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <target dev="tap5a6cf231-3e"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/console.log" append="off"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <video>
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </video>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:01:19 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:01:19 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:01:19 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:01:19 compute-0 nova_compute[189381]: </domain>
Nov 25 11:01:19 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.611 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Preparing to wait for external event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.611 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.612 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.612 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.613 189385 DEBUG nova.virt.libvirt.vif [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-529149042',display_name='tempest-ServerActionsTestJSON-server-529149042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-529149042',id=7,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzWJb9N1xKHRqheyAvQfzLJN/1EXZRkwEZB48VX8Av1lPssKsugB7RXaWiGMq0S+O13B7XTAT58mD2UKEKFp3RMSIDEcXXZEClMlcSxvJw62JrrIVelFsyCSZ1uD8LCvQ==',key_name='tempest-keypair-689374724',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='826c484414ce4e89a03cf37f2359f956',ramdisk_id='',reservation_id='r-g88p5309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-62183409',owner_user_name='tempest-ServerActionsTestJSON-62183409-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:01:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28101b622acc41c3aa3608e548b7ef96',uuid=c4d7af36-620f-46df-8347-4eaeed7856c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.614 189385 DEBUG nova.network.os_vif_util [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converting VIF {"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.615 189385 DEBUG nova.network.os_vif_util [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.615 189385 DEBUG os_vif [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.616 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.616 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.617 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.620 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.620 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a6cf231-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.621 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a6cf231-3e, col_values=(('external_ids', {'iface-id': '5a6cf231-3edc-4338-bb8e-74f0f7e6672d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:ff:2a', 'vm-uuid': 'c4d7af36-620f-46df-8347-4eaeed7856c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.625 189385 DEBUG nova.network.neutron [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Successfully updated port: c0d318cc-f546-4bbc-aebc-f0c185dff8aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:01:19 compute-0 NetworkManager[56317]: <info>  [1764068479.6271] manager: (tap5a6cf231-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.627 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.630 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.631 189385 INFO os_vif [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e')
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.648 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.649 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquired lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.649 189385 DEBUG nova.network.neutron [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.690 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.691 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.691 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] No VIF found with MAC fa:16:3e:82:ff:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.692 189385 INFO nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Using config drive
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.877 189385 DEBUG nova.network.neutron [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:01:19 compute-0 nova_compute[189381]: 2025-11-25 11:01:19.883 189385 DEBUG nova.network.neutron [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Successfully created port: 2709535c-6a90-41ec-b6cf-556a36171fb4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.034 189385 DEBUG nova.compute.manager [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Received event network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.035 189385 DEBUG oslo_concurrency.lockutils [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.035 189385 DEBUG oslo_concurrency.lockutils [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.035 189385 DEBUG oslo_concurrency.lockutils [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.035 189385 DEBUG nova.compute.manager [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] No waiting events found dispatching network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.035 189385 WARNING nova.compute.manager [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Received unexpected event network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b for instance with vm_state active and task_state None.
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.036 189385 DEBUG nova.compute.manager [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-changed-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.036 189385 DEBUG nova.compute.manager [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Refreshing instance network info cache due to event network-changed-5a6cf231-3edc-4338-bb8e-74f0f7e6672d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.036 189385 DEBUG oslo_concurrency.lockutils [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.036 189385 DEBUG oslo_concurrency.lockutils [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.036 189385 DEBUG nova.network.neutron [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Refreshing network info cache for port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.691 189385 INFO nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Creating config drive at /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.config
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.698 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu8tp6zsc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.824 189385 DEBUG oslo_concurrency.processutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu8tp6zsc" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:20 compute-0 kernel: tap5a6cf231-3e: entered promiscuous mode
Nov 25 11:01:20 compute-0 NetworkManager[56317]: <info>  [1764068480.9127] manager: (tap5a6cf231-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 25 11:01:20 compute-0 ovn_controller[97779]: 2025-11-25T11:01:20Z|00071|binding|INFO|Claiming lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d for this chassis.
Nov 25 11:01:20 compute-0 ovn_controller[97779]: 2025-11-25T11:01:20Z|00072|binding|INFO|5a6cf231-3edc-4338-bb8e-74f0f7e6672d: Claiming fa:16:3e:82:ff:2a 10.100.0.6
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.913 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.933 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.936 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:20 compute-0 ovn_controller[97779]: 2025-11-25T11:01:20Z|00073|binding|INFO|Setting lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d ovn-installed in OVS
Nov 25 11:01:20 compute-0 nova_compute[189381]: 2025-11-25 11:01:20.939 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.948 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:2a 10.100.0.6'], port_security=['fa:16:3e:82:ff:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4d7af36-620f-46df-8347-4eaeed7856c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '826c484414ce4e89a03cf37f2359f956', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f94f5308-9585-46c9-858a-5bfd8b44a26c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5e6d622-8d17-4306-9b9d-6c16ad078515, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=5a6cf231-3edc-4338-bb8e-74f0f7e6672d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.950 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d in datapath 23ecff9c-5f66-4ace-9c23-23cc4a7533de bound to our chassis
Nov 25 11:01:20 compute-0 ovn_controller[97779]: 2025-11-25T11:01:20Z|00074|binding|INFO|Setting lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d up in Southbound
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.954 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 23ecff9c-5f66-4ace-9c23-23cc4a7533de
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.968 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2583f4d0-acfe-4db8-81d0-62859bcacabc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.970 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap23ecff9c-51 in ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:01:20 compute-0 systemd-udevd[252659]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.971 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap23ecff9c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.972 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[fd35c520-7b62-42ad-9fb2-05299d4fc85f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.974 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[f2359175-87fc-4425-8439-a00c54b4822e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:20 compute-0 systemd-machined[155706]: New machine qemu-7-instance-00000007.
Nov 25 11:01:20 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 25 11:01:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:20.987 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[70c9e49a-36b3-4cb3-b257-e7980c27b1b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:20 compute-0 NetworkManager[56317]: <info>  [1764068480.9941] device (tap5a6cf231-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:01:20 compute-0 NetworkManager[56317]: <info>  [1764068480.9954] device (tap5a6cf231-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.015 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[4a94d090-7af2-47dd-b41e-65565e41317f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 podman[252628]: 2025-11-25 11:01:21.017264518 +0000 UTC m=+0.125213806 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 25 11:01:21 compute-0 podman[252626]: 2025-11-25 11:01:21.048243594 +0000 UTC m=+0.160003382 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.050 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[22ea2a4e-5141-4e34-8b14-41391ae188d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 systemd-udevd[252672]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.058 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce9c01e-681e-4650-8de0-33b980d2da3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 NetworkManager[56317]: <info>  [1764068481.0595] manager: (tap23ecff9c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.103 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[be09f2a4-e442-4a67-91d4-8e425ad15b5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.107 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[04d2dc7c-d8c3-43a1-9a21-eca9b315bf30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 NetworkManager[56317]: <info>  [1764068481.1432] device (tap23ecff9c-50): carrier: link connected
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.155 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[1c365fd6-87c3-4752-b8aa-53c5f2645c52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.174 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[d05e54a6-ec3f-4cd7-a149-d399c12d4efa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23ecff9c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:aa:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538872, 'reachable_time': 22600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252709, 'error': None, 'target': 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.187 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7be031-2ddf-4c47-8299-5dc73e37ff6d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:aa0e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538872, 'tstamp': 538872}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252710, 'error': None, 'target': 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.202 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e222027b-1b3b-4f45-8bb7-441f864bdb3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23ecff9c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:aa:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538872, 'reachable_time': 22600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252711, 'error': None, 'target': 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.231 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[34e70e5a-fe43-4c01-b673-5a37b672a965]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.282 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[45b51a5d-bbe8-498f-be8a-dce3035bb60d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.283 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23ecff9c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.284 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.284 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23ecff9c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:21 compute-0 NetworkManager[56317]: <info>  [1764068481.2865] manager: (tap23ecff9c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.286 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 kernel: tap23ecff9c-50: entered promiscuous mode
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.291 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.292 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap23ecff9c-50, col_values=(('external_ids', {'iface-id': 'f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.294 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.295 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.296 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/23ecff9c-5f66-4ace-9c23-23cc4a7533de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/23ecff9c-5f66-4ace-9c23-23cc4a7533de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:01:21 compute-0 ovn_controller[97779]: 2025-11-25T11:01:21Z|00075|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.297 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[70f9d986-89a0-4909-a69b-06c52ca75fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.298 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-23ecff9c-5f66-4ace-9c23-23cc4a7533de
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/23ecff9c-5f66-4ace-9c23-23cc4a7533de.pid.haproxy
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID 23ecff9c-5f66-4ace-9c23-23cc4a7533de
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.299 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'env', 'PROCESS_TAG=haproxy-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/23ecff9c-5f66-4ace-9c23-23cc4a7533de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.310 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.452 189385 DEBUG nova.network.neutron [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.457 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.457 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.457 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.457 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.458 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.459 189385 INFO nova.compute.manager [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Terminating instance
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.460 189385 DEBUG nova.compute.manager [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:01:21 compute-0 kernel: tap4b99e8ff-a6 (unregistering): left promiscuous mode
Nov 25 11:01:21 compute-0 NetworkManager[56317]: <info>  [1764068481.5019] device (tap4b99e8ff-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.509 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 ovn_controller[97779]: 2025-11-25T11:01:21Z|00076|binding|INFO|Releasing lport 4b99e8ff-a6c5-4046-9654-a09c32b9646b from this chassis (sb_readonly=0)
Nov 25 11:01:21 compute-0 ovn_controller[97779]: 2025-11-25T11:01:21Z|00077|binding|INFO|Setting lport 4b99e8ff-a6c5-4046-9654-a09c32b9646b down in Southbound
Nov 25 11:01:21 compute-0 ovn_controller[97779]: 2025-11-25T11:01:21Z|00078|binding|INFO|Removing iface tap4b99e8ff-a6 ovn-installed in OVS
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.515 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.531 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 25 11:01:21 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 8.002s CPU time.
Nov 25 11:01:21 compute-0 systemd-machined[155706]: Machine qemu-6-instance-00000006 terminated.
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.569 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Releasing lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.570 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Instance network_info: |[{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.573 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Start _get_guest_xml network_info=[{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.580 189385 WARNING nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.588 189385 DEBUG nova.virt.libvirt.host [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.590 189385 DEBUG nova.virt.libvirt.host [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.593 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:40:60:8b 10.100.0.14'], port_security=['fa:16:3e:40:60:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7a2ec38f-d9cc-45cf-8338-fe982e25d7e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5ab8414-3551-47a1-933c-4988048192d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '81c1c4c8c73c403d8d6b430858c11434', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aadc1789-9558-4b2d-a74d-b9afb6d40937', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e75e38da-6f2b-44a6-a44c-e2f80017c82d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=4b99e8ff-a6c5-4046-9654-a09c32b9646b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.595 189385 DEBUG nova.virt.libvirt.host [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.596 189385 DEBUG nova.virt.libvirt.host [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.596 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.597 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.598 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.599 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.599 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.600 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.600 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.600 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.601 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.601 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.602 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.602 189385 DEBUG nova.virt.hardware [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.607 189385 DEBUG nova.virt.libvirt.vif [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2107609661',display_name='tempest-AttachInterfacesUnderV243Test-server-2107609661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2107609661',id=8,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtO0HjgWiM0JSycO6jGf2/nAZhrR5B9RHoKEiCWRqTQ2ZEGJWpoGM2BnIEFm5FDR+Uhh3GbUmTBAMlbuu2npur0QUHXfwQUDwLTXRSY2Cr00b6N3oiGImBs0AlIIVa26g==',key_name='tempest-keypair-223894159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aab9dbacd4e342dc8dba92c598ab985b',ramdisk_id='',reservation_id='r-ufa2json',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-2133702226',owner_user_name='tempest-AttachInterfacesUnderV243Test-2133702226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:01:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2c4b9fe3a6ed4ac6a15a5f331dbe9842',uuid=388d7cfb-c9e5-413a-9649-93e137294b38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.608 189385 DEBUG nova.network.os_vif_util [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Converting VIF {"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.609 189385 DEBUG nova.network.os_vif_util [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:e1:52,bridge_name='br-int',has_traffic_filtering=True,id=c0d318cc-f546-4bbc-aebc-f0c185dff8aa,network=Network(2fd87850-667e-4c51-ba0e-fa79b8cba493),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0d318cc-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.610 189385 DEBUG nova.objects.instance [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lazy-loading 'pci_devices' on Instance uuid 388d7cfb-c9e5-413a-9649-93e137294b38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.616 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068481.615897, c4d7af36-620f-46df-8347-4eaeed7856c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.617 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] VM Started (Lifecycle Event)
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.622 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <uuid>388d7cfb-c9e5-413a-9649-93e137294b38</uuid>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <name>instance-00000008</name>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-2107609661</nova:name>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:01:21</nova:creationTime>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:01:21 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:01:21 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:01:21 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:01:21 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:01:21 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:01:21 compute-0 nova_compute[189381]:         <nova:user uuid="2c4b9fe3a6ed4ac6a15a5f331dbe9842">tempest-AttachInterfacesUnderV243Test-2133702226-project-member</nova:user>
Nov 25 11:01:21 compute-0 nova_compute[189381]:         <nova:project uuid="aab9dbacd4e342dc8dba92c598ab985b">tempest-AttachInterfacesUnderV243Test-2133702226</nova:project>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:01:21 compute-0 nova_compute[189381]:         <nova:port uuid="c0d318cc-f546-4bbc-aebc-f0c185dff8aa">
Nov 25 11:01:21 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <system>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <entry name="serial">388d7cfb-c9e5-413a-9649-93e137294b38</entry>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <entry name="uuid">388d7cfb-c9e5-413a-9649-93e137294b38</entry>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </system>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <os>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   </os>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <features>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   </features>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk.config"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:92:e1:52"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <target dev="tapc0d318cc-f5"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/console.log" append="off"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <video>
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </video>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:01:21 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:01:21 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:01:21 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:01:21 compute-0 nova_compute[189381]: </domain>
Nov 25 11:01:21 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.631 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Preparing to wait for external event network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.632 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.632 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.633 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.633 189385 DEBUG nova.virt.libvirt.vif [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2107609661',display_name='tempest-AttachInterfacesUnderV243Test-server-2107609661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2107609661',id=8,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtO0HjgWiM0JSycO6jGf2/nAZhrR5B9RHoKEiCWRqTQ2ZEGJWpoGM2BnIEFm5FDR+Uhh3GbUmTBAMlbuu2npur0QUHXfwQUDwLTXRSY2Cr00b6N3oiGImBs0AlIIVa26g==',key_name='tempest-keypair-223894159',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aab9dbacd4e342dc8dba92c598ab985b',ramdisk_id='',reservation_id='r-ufa2json',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-2133702226',owner_user_name='tempest-AttachInterfacesUnderV243Test-2133702226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:01:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2c4b9fe3a6ed4ac6a15a5f331dbe9842',uuid=388d7cfb-c9e5-413a-9649-93e137294b38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.633 189385 DEBUG nova.network.os_vif_util [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Converting VIF {"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.634 189385 DEBUG nova.network.os_vif_util [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:e1:52,bridge_name='br-int',has_traffic_filtering=True,id=c0d318cc-f546-4bbc-aebc-f0c185dff8aa,network=Network(2fd87850-667e-4c51-ba0e-fa79b8cba493),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0d318cc-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.636 189385 DEBUG os_vif [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:e1:52,bridge_name='br-int',has_traffic_filtering=True,id=c0d318cc-f546-4bbc-aebc-f0c185dff8aa,network=Network(2fd87850-667e-4c51-ba0e-fa79b8cba493),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0d318cc-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.637 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.638 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.639 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.642 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.645 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.646 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0d318cc-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.647 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0d318cc-f5, col_values=(('external_ids', {'iface-id': 'c0d318cc-f546-4bbc-aebc-f0c185dff8aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:92:e1:52', 'vm-uuid': '388d7cfb-c9e5-413a-9649-93e137294b38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:21 compute-0 NetworkManager[56317]: <info>  [1764068481.6509] manager: (tapc0d318cc-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.650 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.654 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.657 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068481.6160543, c4d7af36-620f-46df-8347-4eaeed7856c6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.658 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] VM Paused (Lifecycle Event)
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.664 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.665 189385 INFO os_vif [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:e1:52,bridge_name='br-int',has_traffic_filtering=True,id=c0d318cc-f546-4bbc-aebc-f0c185dff8aa,network=Network(2fd87850-667e-4c51-ba0e-fa79b8cba493),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0d318cc-f5')
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.684 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.693 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.716 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.736 189385 INFO nova.virt.libvirt.driver [-] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Instance destroyed successfully.
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.737 189385 DEBUG nova.objects.instance [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lazy-loading 'resources' on Instance uuid 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:21 compute-0 podman[252749]: 2025-11-25 11:01:21.75113198 +0000 UTC m=+0.082500909 container create 8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.757 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.758 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.758 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] No VIF found with MAC fa:16:3e:92:e1:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.758 189385 INFO nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Using config drive
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.762 189385 DEBUG nova.virt.libvirt.vif [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-411749896',display_name='tempest-ServerAddressesTestJSON-server-411749896',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-411749896',id=6,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:01:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='81c1c4c8c73c403d8d6b430858c11434',ramdisk_id='',reservation_id='r-739xiapo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-10314999',owner_user_name='tempest-ServerAddressesTestJSON-10314999-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:01:14Z,user_data=None,user_id='b821e5c3d70f4dc78d5de14f250d8590',uuid=7a2ec38f-d9cc-45cf-8338-fe982e25d7e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.762 189385 DEBUG nova.network.os_vif_util [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Converting VIF {"id": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "address": "fa:16:3e:40:60:8b", "network": {"id": "c5ab8414-3551-47a1-933c-4988048192d1", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-275586023-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "81c1c4c8c73c403d8d6b430858c11434", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4b99e8ff-a6", "ovs_interfaceid": "4b99e8ff-a6c5-4046-9654-a09c32b9646b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.763 189385 DEBUG nova.network.os_vif_util [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:40:60:8b,bridge_name='br-int',has_traffic_filtering=True,id=4b99e8ff-a6c5-4046-9654-a09c32b9646b,network=Network(c5ab8414-3551-47a1-933c-4988048192d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b99e8ff-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.763 189385 DEBUG os_vif [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:60:8b,bridge_name='br-int',has_traffic_filtering=True,id=4b99e8ff-a6c5-4046-9654-a09c32b9646b,network=Network(c5ab8414-3551-47a1-933c-4988048192d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b99e8ff-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.765 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.765 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b99e8ff-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.767 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.769 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.773 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.775 189385 INFO os_vif [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:40:60:8b,bridge_name='br-int',has_traffic_filtering=True,id=4b99e8ff-a6c5-4046-9654-a09c32b9646b,network=Network(c5ab8414-3551-47a1-933c-4988048192d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4b99e8ff-a6')
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.775 189385 INFO nova.virt.libvirt.driver [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Deleting instance files /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2_del
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.776 189385 INFO nova.virt.libvirt.driver [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Deletion of /var/lib/nova/instances/7a2ec38f-d9cc-45cf-8338-fe982e25d7e2_del complete
Nov 25 11:01:21 compute-0 podman[252749]: 2025-11-25 11:01:21.704041197 +0000 UTC m=+0.035410146 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:01:21 compute-0 systemd[1]: Started libpod-conmon-8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413.scope.
Nov 25 11:01:21 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b09e34b21b152b6dfb102b0e2bd69f6ad690321fb7fa66d4e6c98c754c109a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:21 compute-0 podman[252749]: 2025-11-25 11:01:21.855205552 +0000 UTC m=+0.186574501 container init 8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:01:21 compute-0 podman[252749]: 2025-11-25 11:01:21.863603585 +0000 UTC m=+0.194972514 container start 8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 25 11:01:21 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[252783]: [NOTICE]   (252787) : New worker (252789) forked
Nov 25 11:01:21 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[252783]: [NOTICE]   (252787) : Loading success.
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.924 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 4b99e8ff-a6c5-4046-9654-a09c32b9646b in datapath c5ab8414-3551-47a1-933c-4988048192d1 unbound from our chassis
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.926 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c5ab8414-3551-47a1-933c-4988048192d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.927 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[7f322798-7ec6-4f4b-84cf-d05b5273ddad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:21.928 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1 namespace which is not needed anymore
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.965 189385 INFO nova.compute.manager [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Took 0.50 seconds to destroy the instance on the hypervisor.
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.967 189385 DEBUG oslo.service.loopingcall [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.967 189385 DEBUG nova.compute.manager [-] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:01:21 compute-0 nova_compute[189381]: 2025-11-25 11:01:21.968 189385 DEBUG nova.network.neutron [-] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:01:22 compute-0 neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1[252474]: [NOTICE]   (252478) : haproxy version is 2.8.14-c23fe91
Nov 25 11:01:22 compute-0 neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1[252474]: [NOTICE]   (252478) : path to executable is /usr/sbin/haproxy
Nov 25 11:01:22 compute-0 neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1[252474]: [WARNING]  (252478) : Exiting Master process...
Nov 25 11:01:22 compute-0 neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1[252474]: [ALERT]    (252478) : Current worker (252480) exited with code 143 (Terminated)
Nov 25 11:01:22 compute-0 neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1[252474]: [WARNING]  (252478) : All workers exited. Exiting... (0)
Nov 25 11:01:22 compute-0 systemd[1]: libpod-0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58.scope: Deactivated successfully.
Nov 25 11:01:22 compute-0 podman[252814]: 2025-11-25 11:01:22.099520154 +0000 UTC m=+0.047763964 container died 0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:01:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58-userdata-shm.mount: Deactivated successfully.
Nov 25 11:01:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dfb9421dcc7e8521861421a2788282bf13f21f93af822c34e1b3582ebb335d0-merged.mount: Deactivated successfully.
Nov 25 11:01:22 compute-0 podman[252814]: 2025-11-25 11:01:22.158620264 +0000 UTC m=+0.106864074 container cleanup 0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:01:22 compute-0 systemd[1]: libpod-conmon-0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58.scope: Deactivated successfully.
Nov 25 11:01:22 compute-0 podman[252842]: 2025-11-25 11:01:22.240123374 +0000 UTC m=+0.054845199 container remove 0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.249 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ab9339-509b-4ee0-84e4-11b5c95b6901]: (4, ('Tue Nov 25 11:01:22 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1 (0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58)\n0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58\nTue Nov 25 11:01:22 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1 (0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58)\n0a7843cfb29abea3af84d8ef43f3b4d8da7e1aa3d49dc617b5c55fb258444e58\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.251 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2025920e-8c75-407f-85a4-877baaaa6131]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.252 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5ab8414-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.254 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:22 compute-0 kernel: tapc5ab8414-30: left promiscuous mode
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.272 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.275 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[45e7a907-a302-448d-9324-c91d60702a3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.276 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.290 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[86a84b7e-5f5e-486d-8319-5dd26f7c6fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.291 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[a1abdcb9-e723-436a-aea9-36adbec72d30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.306 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5a8333-10c9-4114-acb3-1ef6ec958da3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537119, 'reachable_time': 42486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252860, 'error': None, 'target': 'ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.307 189385 DEBUG nova.network.neutron [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updated VIF entry in instance network info cache for port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.308 189385 DEBUG nova.network.neutron [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updating instance_info_cache with network_info: [{"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:22 compute-0 systemd[1]: run-netns-ovnmeta\x2dc5ab8414\x2d3551\x2d47a1\x2d933c\x2d4988048192d1.mount: Deactivated successfully.
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.311 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c5ab8414-3551-47a1-933c-4988048192d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.311 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[390b83db-14bb-45f7-a649-064c53e6193b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.319 189385 DEBUG oslo_concurrency.lockutils [req-17b83d43-e348-4219-b931-8b2ec7c31d5d req-f28ac8ca-cc64-4f41-8f86-9dc077c81f8c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.566 189385 INFO nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Creating config drive at /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk.config
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.573 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6fkmii3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.696 189385 DEBUG oslo_concurrency.processutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6fkmii3" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:22 compute-0 kernel: tapc0d318cc-f5: entered promiscuous mode
Nov 25 11:01:22 compute-0 NetworkManager[56317]: <info>  [1764068482.7844] manager: (tapc0d318cc-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 25 11:01:22 compute-0 systemd-udevd[252696]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:01:22 compute-0 ovn_controller[97779]: 2025-11-25T11:01:22Z|00079|memory|INFO|peak resident set size grew 50% in last 2714.2 seconds, from 16128 kB to 24220 kB
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.787 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:22 compute-0 ovn_controller[97779]: 2025-11-25T11:01:22Z|00080|memory|INFO|idl-cells-OVN_Southbound:10292 idl-cells-Open_vSwitch:756 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:335 lflow-cache-entries-cache-matches:288 lflow-cache-size-KB:1423 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:640 ofctrl_installed_flow_usage-KB:466 ofctrl_sb_flow_ref_usage-KB:239
Nov 25 11:01:22 compute-0 ovn_controller[97779]: 2025-11-25T11:01:22Z|00081|binding|INFO|Claiming lport c0d318cc-f546-4bbc-aebc-f0c185dff8aa for this chassis.
Nov 25 11:01:22 compute-0 ovn_controller[97779]: 2025-11-25T11:01:22Z|00082|binding|INFO|c0d318cc-f546-4bbc-aebc-f0c185dff8aa: Claiming fa:16:3e:92:e1:52 10.100.0.14
Nov 25 11:01:22 compute-0 NetworkManager[56317]: <info>  [1764068482.8048] device (tapc0d318cc-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:01:22 compute-0 NetworkManager[56317]: <info>  [1764068482.8057] device (tapc0d318cc-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.804 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.805 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:01:22 compute-0 ovn_controller[97779]: 2025-11-25T11:01:22Z|00083|binding|INFO|Setting lport c0d318cc-f546-4bbc-aebc-f0c185dff8aa ovn-installed in OVS
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.809 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:22 compute-0 nova_compute[189381]: 2025-11-25 11:01:22.812 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:22 compute-0 ovn_controller[97779]: 2025-11-25T11:01:22Z|00084|binding|INFO|Setting lport c0d318cc-f546-4bbc-aebc-f0c185dff8aa up in Southbound
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.817 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:e1:52 10.100.0.14'], port_security=['fa:16:3e:92:e1:52 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '388d7cfb-c9e5-413a-9649-93e137294b38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fd87850-667e-4c51-ba0e-fa79b8cba493', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aab9dbacd4e342dc8dba92c598ab985b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8604b340-fad6-470f-ae73-7809d51611ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=159a6a68-a039-46f1-aa18-f4c9b1633455, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=c0d318cc-f546-4bbc-aebc-f0c185dff8aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.819 106634 INFO neutron.agent.ovn.metadata.agent [-] Port c0d318cc-f546-4bbc-aebc-f0c185dff8aa in datapath 2fd87850-667e-4c51-ba0e-fa79b8cba493 bound to our chassis
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.821 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2fd87850-667e-4c51-ba0e-fa79b8cba493
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.838 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[30094931-5513-4457-ab9a-f783b5a659e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.840 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2fd87850-61 in ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:01:22 compute-0 systemd-machined[155706]: New machine qemu-8-instance-00000008.
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.842 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2fd87850-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.843 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[083ba383-35d2-4f97-9664-4112085ae99f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.844 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4b848a-0fda-41b7-b99e-d246af9a8580]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.857 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[8f56ef60-bd0e-49b8-8ead-95e444c3f30b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.880 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[971f97f2-7af6-49b2-87f1-036ecc03472c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.913 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3c69a4-75fb-43fe-b985-6b24976c410b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.920 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec0d6a9-9fe5-4f2d-a201-144597183edb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 NetworkManager[56317]: <info>  [1764068482.9232] manager: (tap2fd87850-60): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.960 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[cb99b2da-6ca4-4da1-b0ef-d30438bc32d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:22.963 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[1b43e9d8-60da-41db-b743-fb3d2dda8252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:22 compute-0 NetworkManager[56317]: <info>  [1764068482.9911] device (tap2fd87850-60): carrier: link connected
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.000 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[f2359bea-7e95-41d1-b19f-aab72fb8b189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.018 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5f7e88-758e-4fce-aaa9-567b46cf1f94]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fd87850-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:b6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539057, 'reachable_time': 18199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252895, 'error': None, 'target': 'ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.035 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[6daa9338-dc27-4be6-87ef-20a2c6cbd619]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe62:b650'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539057, 'tstamp': 539057}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252896, 'error': None, 'target': 'ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.050 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad53ab0-2351-4631-8725-c4bc0831d719]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2fd87850-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:62:b6:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539057, 'reachable_time': 18199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252897, 'error': None, 'target': 'ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.075 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[bca199f0-b688-482d-abe6-9e7054cc8871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.140 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0de6e422-1fb4-43f9-8753-fefcd95afec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.141 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fd87850-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.142 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.142 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fd87850-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:23 compute-0 NetworkManager[56317]: <info>  [1764068483.1446] manager: (tap2fd87850-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.144 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:23 compute-0 kernel: tap2fd87850-60: entered promiscuous mode
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.147 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.152 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2fd87850-60, col_values=(('external_ids', {'iface-id': '0d385036-42e8-4835-9d5d-981ad129264d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.155 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:23 compute-0 ovn_controller[97779]: 2025-11-25T11:01:23Z|00085|binding|INFO|Releasing lport 0d385036-42e8-4835-9d5d-981ad129264d from this chassis (sb_readonly=0)
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.157 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2fd87850-667e-4c51-ba0e-fa79b8cba493.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2fd87850-667e-4c51-ba0e-fa79b8cba493.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.158 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0c67ba2a-e487-4f83-8277-2d3b775f6094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.158 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-2fd87850-667e-4c51-ba0e-fa79b8cba493
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/2fd87850-667e-4c51-ba0e-fa79b8cba493.pid.haproxy
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID 2fd87850-667e-4c51-ba0e-fa79b8cba493
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:01:23 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:23.159 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493', 'env', 'PROCESS_TAG=haproxy-2fd87850-667e-4c51-ba0e-fa79b8cba493', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2fd87850-667e-4c51-ba0e-fa79b8cba493.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.167 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.190 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068483.1896117, 388d7cfb-c9e5-413a-9649-93e137294b38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.191 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] VM Started (Lifecycle Event)
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.221 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.227 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068483.1897297, 388d7cfb-c9e5-413a-9649-93e137294b38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.228 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] VM Paused (Lifecycle Event)
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.253 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.259 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.262 189385 DEBUG nova.network.neutron [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Successfully updated port: 2709535c-6a90-41ec-b6cf-556a36171fb4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.285 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.292 189385 DEBUG nova.compute.manager [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-changed-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.293 189385 DEBUG nova.compute.manager [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Refreshing instance network info cache due to event network-changed-c0d318cc-f546-4bbc-aebc-f0c185dff8aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.293 189385 DEBUG oslo_concurrency.lockutils [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.294 189385 DEBUG oslo_concurrency.lockutils [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.294 189385 DEBUG nova.network.neutron [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Refreshing network info cache for port c0d318cc-f546-4bbc-aebc-f0c185dff8aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.296 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.297 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquired lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.297 189385 DEBUG nova.network.neutron [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.399 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:23 compute-0 podman[252936]: 2025-11-25 11:01:23.568351019 +0000 UTC m=+0.057995230 container create f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 25 11:01:23 compute-0 systemd[1]: Started libpod-conmon-f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a.scope.
Nov 25 11:01:23 compute-0 nova_compute[189381]: 2025-11-25 11:01:23.619 189385 DEBUG nova.network.neutron [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:01:23 compute-0 podman[252936]: 2025-11-25 11:01:23.537260569 +0000 UTC m=+0.026904800 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:01:23 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:01:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cb3769e252a90ffbff6e2e214edd3f354a1469d94cc20104ead5191d5b6a2b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:23 compute-0 podman[252936]: 2025-11-25 11:01:23.669940009 +0000 UTC m=+0.159584240 container init f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:01:23 compute-0 podman[252936]: 2025-11-25 11:01:23.677339564 +0000 UTC m=+0.166983775 container start f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:01:23 compute-0 neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493[252951]: [NOTICE]   (252955) : New worker (252957) forked
Nov 25 11:01:23 compute-0 neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493[252951]: [NOTICE]   (252955) : Loading success.
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.012 189385 DEBUG nova.network.neutron [-] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:24 compute-0 ovn_controller[97779]: 2025-11-25T11:01:24Z|00086|binding|INFO|Releasing lport 0d385036-42e8-4835-9d5d-981ad129264d from this chassis (sb_readonly=0)
Nov 25 11:01:24 compute-0 ovn_controller[97779]: 2025-11-25T11:01:24Z|00087|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.026 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.120 189385 INFO nova.compute.manager [-] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Took 2.15 seconds to deallocate network for instance.
Nov 25 11:01:24 compute-0 ovn_controller[97779]: 2025-11-25T11:01:24Z|00088|binding|INFO|Releasing lport 0d385036-42e8-4835-9d5d-981ad129264d from this chassis (sb_readonly=0)
Nov 25 11:01:24 compute-0 ovn_controller[97779]: 2025-11-25T11:01:24Z|00089|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.263 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.286 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.287 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.408 189385 DEBUG nova.compute.provider_tree [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.422 189385 DEBUG nova.scheduler.client.report [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.453 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.489 189385 INFO nova.scheduler.client.report [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Deleted allocations for instance 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.584 189385 DEBUG oslo_concurrency.lockutils [None req-858fddf3-8c3a-4033-bacd-e5ab8261898e b821e5c3d70f4dc78d5de14f250d8590 81c1c4c8c73c403d8d6b430858c11434 - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.593 189385 DEBUG nova.compute.manager [req-6e0f3f4f-cf52-44e9-b6b4-1b51935da149 req-f3079bcc-1b33-414c-8936-01bc0512b961 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.594 189385 DEBUG oslo_concurrency.lockutils [req-6e0f3f4f-cf52-44e9-b6b4-1b51935da149 req-f3079bcc-1b33-414c-8936-01bc0512b961 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.595 189385 DEBUG oslo_concurrency.lockutils [req-6e0f3f4f-cf52-44e9-b6b4-1b51935da149 req-f3079bcc-1b33-414c-8936-01bc0512b961 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.595 189385 DEBUG oslo_concurrency.lockutils [req-6e0f3f4f-cf52-44e9-b6b4-1b51935da149 req-f3079bcc-1b33-414c-8936-01bc0512b961 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.596 189385 DEBUG nova.compute.manager [req-6e0f3f4f-cf52-44e9-b6b4-1b51935da149 req-f3079bcc-1b33-414c-8936-01bc0512b961 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Processing event network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.599 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.611 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068484.610461, 388d7cfb-c9e5-413a-9649-93e137294b38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.612 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] VM Resumed (Lifecycle Event)
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.614 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.622 189385 INFO nova.virt.libvirt.driver [-] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Instance spawned successfully.
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.624 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.628 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.638 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.645 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.645 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.646 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.646 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.647 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.648 189385 DEBUG nova.virt.libvirt.driver [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.655 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:01:24 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:24.807 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.984 189385 INFO nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Took 13.09 seconds to spawn the instance on the hypervisor.
Nov 25 11:01:24 compute-0 nova_compute[189381]: 2025-11-25 11:01:24.985 189385 DEBUG nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.067 189385 INFO nova.compute.manager [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Took 14.07 seconds to build instance.
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.118 189385 DEBUG nova.network.neutron [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Updating instance_info_cache with network_info: [{"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.202 189385 DEBUG oslo_concurrency.lockutils [None req-8638dfff-cc5f-48fa-91fe-05ea4b2c6e04 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.318s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.355 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Releasing lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.356 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Instance network_info: |[{"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.359 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Start _get_guest_xml network_info=[{"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.365 189385 WARNING nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.370 189385 DEBUG nova.virt.libvirt.host [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.371 189385 DEBUG nova.virt.libvirt.host [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.376 189385 DEBUG nova.virt.libvirt.host [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.377 189385 DEBUG nova.virt.libvirt.host [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.377 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.377 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.378 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.378 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.379 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.379 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.379 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.380 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.380 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.380 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.381 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.381 189385 DEBUG nova.virt.hardware [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.384 189385 DEBUG nova.virt.libvirt.vif [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:01:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-125293488',display_name='tempest-ServersTestJSON-server-125293488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-125293488',id=9,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGITl5lRnX7slRfvHP/ELhw0LLrPRE0x1kdo8T5bb/17XUkB4sDlG3WkRA5AXjM/WNki8O2IF21t86HfzWDRbLiNGFj4HFYAo5Qj0GcQSI/wzBcPi8+QjYSvJwoJw0Ypsg==',key_name='tempest-keypair-774350768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04532f8fff61471495a338caf8c9670e',ramdisk_id='',reservation_id='r-rn4qfay3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1677364154',owner_user_name='tempest-ServersTestJSON-1677364154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:01:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dcfeee3b6d344d059499b78710287a87',uuid=46bfe581-82ad-4ba4-a5f9-4fff7ab4223a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.385 189385 DEBUG nova.network.os_vif_util [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Converting VIF {"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.386 189385 DEBUG nova.network.os_vif_util [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:e1:3b,bridge_name='br-int',has_traffic_filtering=True,id=2709535c-6a90-41ec-b6cf-556a36171fb4,network=Network(d2311348-22a3-40d9-9c9d-8ec92e308dc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2709535c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.387 189385 DEBUG nova.objects.instance [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lazy-loading 'pci_devices' on Instance uuid 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.398 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <uuid>46bfe581-82ad-4ba4-a5f9-4fff7ab4223a</uuid>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <name>instance-00000009</name>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <nova:name>tempest-ServersTestJSON-server-125293488</nova:name>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:01:25</nova:creationTime>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:01:25 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:01:25 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:01:25 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:01:25 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:01:25 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:01:25 compute-0 nova_compute[189381]:         <nova:user uuid="dcfeee3b6d344d059499b78710287a87">tempest-ServersTestJSON-1677364154-project-member</nova:user>
Nov 25 11:01:25 compute-0 nova_compute[189381]:         <nova:project uuid="04532f8fff61471495a338caf8c9670e">tempest-ServersTestJSON-1677364154</nova:project>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:01:25 compute-0 nova_compute[189381]:         <nova:port uuid="2709535c-6a90-41ec-b6cf-556a36171fb4">
Nov 25 11:01:25 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <system>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <entry name="serial">46bfe581-82ad-4ba4-a5f9-4fff7ab4223a</entry>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <entry name="uuid">46bfe581-82ad-4ba4-a5f9-4fff7ab4223a</entry>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </system>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <os>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   </os>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <features>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   </features>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk.config"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:29:e1:3b"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <target dev="tap2709535c-6a"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/console.log" append="off"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <video>
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </video>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:01:25 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:01:25 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:01:25 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:01:25 compute-0 nova_compute[189381]: </domain>
Nov 25 11:01:25 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
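The domain XML above is nova's complete libvirt definition for instance-00000009: a q35 hvm guest with 131072 KiB (128 MiB) of memory, one vCPU, a qcow2 virtio root disk, a SATA cdrom reserved for the config drive, and a virtio NIC on tap2709535c-6a. Once libvirt has defined the guest, the same XML can be read back; a minimal sketch, assuming python3-libvirt is available on the compute host:

    # Sketch: fetch the live domain XML that _get_guest_xml logged above.
    # Assumes python3-libvirt and access to the local qemu:///system URI.
    import libvirt

    conn = libvirt.open('qemu:///system')         # local hypervisor connection
    dom = conn.lookupByName('instance-00000009')  # <name> element from the XML
    print(dom.XMLDesc())                          # libvirt's current definition
    conn.close()

The CLI equivalent is virsh dumpxml instance-00000009.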
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.399 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Preparing to wait for external event network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.400 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.400 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.400 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
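The prepare_for_instance_event/lock lines above show nova registering a waiter for network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 before it plugs the interface, so neutron's notification cannot arrive ahead of the waiter. A simplified sketch of that register-then-wait pattern (the real implementation in nova/compute/manager.py is eventlet-based):

    # Illustrative register-then-wait pattern, not nova's actual code.
    import threading

    _events = {}                     # (instance_uuid, event_key) -> Event
    _events_lock = threading.Lock()  # role of the "<uuid>-events" lock above

    def prepare_for_instance_event(instance_uuid, event_key):
        with _events_lock:           # "Acquiring lock ...-events" in the log
            return _events.setdefault((instance_uuid, event_key),
                                      threading.Event())

    # Register first, plug the VIF, then block (with a timeout) on the event;
    # the handler for the neutron callback calls .set() on the same object.
    ev = prepare_for_instance_event(
        '46bfe581-82ad-4ba4-a5f9-4fff7ab4223a',
        'network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4')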
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.401 189385 DEBUG nova.virt.libvirt.vif [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:01:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-125293488',display_name='tempest-ServersTestJSON-server-125293488',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-125293488',id=9,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGITl5lRnX7slRfvHP/ELhw0LLrPRE0x1kdo8T5bb/17XUkB4sDlG3WkRA5AXjM/WNki8O2IF21t86HfzWDRbLiNGFj4HFYAo5Qj0GcQSI/wzBcPi8+QjYSvJwoJw0Ypsg==',key_name='tempest-keypair-774350768',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04532f8fff61471495a338caf8c9670e',ramdisk_id='',reservation_id='r-rn4qfay3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1677364154',owner_user_name='tempest-ServersTestJSON-1677364154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:01:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dcfeee3b6d344d059499b78710287a87',uuid=46bfe581-82ad-4ba4-a5f9-4fff7ab4223a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.401 189385 DEBUG nova.network.os_vif_util [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Converting VIF {"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.402 189385 DEBUG nova.network.os_vif_util [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:e1:3b,bridge_name='br-int',has_traffic_filtering=True,id=2709535c-6a90-41ec-b6cf-556a36171fb4,network=Network(d2311348-22a3-40d9-9c9d-8ec92e308dc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2709535c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.402 189385 DEBUG os_vif [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:e1:3b,bridge_name='br-int',has_traffic_filtering=True,id=2709535c-6a90-41ec-b6cf-556a36171fb4,network=Network(d2311348-22a3-40d9-9c9d-8ec92e308dc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2709535c-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.403 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.403 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.404 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.411 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.412 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2709535c-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.412 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2709535c-6a, col_values=(('external_ids', {'iface-id': '2709535c-6a90-41ec-b6cf-556a36171fb4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:e1:3b', 'vm-uuid': '46bfe581-82ad-4ba4-a5f9-4fff7ab4223a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
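Both OVSDB transactions above are idempotent. AddBridgeCommand with may_exist=True is a no-op when br-int already exists (hence "Transaction caused no change"); the second transaction adds the tap port and stamps its Interface row with external_ids, where iface-id carries the neutron port UUID that ovn-controller later matches when claiming the lport. A sketch of the same two transactions through ovsdbapp, the library emitting these do_commit lines (the socket path is an assumption):

    # Sketch only: replay the two logged transactions with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')   # assumed path
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:         # txn n=1, idx=0
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))

    with api.transaction(check_error=True) as txn:         # AddPort + DbSet
        txn.add(api.add_port('br-int', 'tap2709535c-6a', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap2709535c-6a',
            ('external_ids', {
                'iface-id': '2709535c-6a90-41ec-b6cf-556a36171fb4',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:29:e1:3b',
                'vm-uuid': '46bfe581-82ad-4ba4-a5f9-4fff7ab4223a'})))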
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.414 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:25 compute-0 NetworkManager[56317]: <info>  [1764068485.4151] manager: (tap2709535c-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.416 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.424 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.425 189385 INFO os_vif [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:e1:3b,bridge_name='br-int',has_traffic_filtering=True,id=2709535c-6a90-41ec-b6cf-556a36171fb4,network=Network(d2311348-22a3-40d9-9c9d-8ec92e308dc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2709535c-6a')
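"Successfully plugged vif" closes the call that started at os_vif/__init__.py:76. os-vif's public surface is small: initialize() loads the plugin entry points, then plug()/unplug() take a VIF object plus an InstanceInfo. A minimal sketch with values from the log; a real caller also passes network= and port_profile= objects, omitted here for brevity:

    # Sketch of the os-vif entry points; values copied from the log, the
    # object set is trimmed (no network/port_profile fields).
    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()                  # load the vif plug-in drivers

    vif = vif_obj.VIFOpenVSwitch(
        id='2709535c-6a90-41ec-b6cf-556a36171fb4',
        address='fa:16:3e:29:e1:3b',
        bridge_name='br-int',
        vif_name='tap2709535c-6a')

    inst = instance_info.InstanceInfo(
        uuid='46bfe581-82ad-4ba4-a5f9-4fff7ab4223a',
        name='instance-00000009')

    os_vif.plug(vif, inst)               # the call logged at __init__.py:76
    # os_vif.unplug(vif, inst) undoes it on teardown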
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.484 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.485 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.485 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] No VIF found with MAC fa:16:3e:29:e1:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.486 189385 INFO nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Using config drive
Nov 25 11:01:25 compute-0 podman[252970]: 2025-11-25 11:01:25.519516875 +0000 UTC m=+0.056829686 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.575 189385 DEBUG nova.network.neutron [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updated VIF entry in instance network info cache for port c0d318cc-f546-4bbc-aebc-f0c185dff8aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.576 189385 DEBUG nova.network.neutron [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.598 189385 DEBUG oslo_concurrency.lockutils [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.599 189385 DEBUG nova.compute.manager [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.599 189385 DEBUG oslo_concurrency.lockutils [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.600 189385 DEBUG oslo_concurrency.lockutils [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.600 189385 DEBUG oslo_concurrency.lockutils [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.600 189385 DEBUG nova.compute.manager [req-3f44d868-bd71-4488-863a-18a9c8d8131d req-10c2cfdc-1f75-432d-97a2-9e66080027f4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Processing event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.601 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.605 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068485.6055245, c4d7af36-620f-46df-8347-4eaeed7856c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.606 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] VM Resumed (Lifecycle Event)
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.617 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.622 189385 INFO nova.virt.libvirt.driver [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Instance spawned successfully.
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.622 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.635 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.653 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
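The sync line above compares nova's stored power state (DB power_state: 0) with what the hypervisor now reports (VM power_state: 1). The integers come from nova.compute.power_state; for reference:

    # nova.compute.power_state values, dict form for quick lookup.
    POWER_STATES = {
        0: 'NOSTATE',    # DB value before first boot, as logged above
        1: 'RUNNING',    # reported once the libvirt domain is up
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }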
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.667 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.667 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.668 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.668 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.669 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.669 189385 DEBUG nova.virt.libvirt.driver [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.672 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.770 189385 INFO nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Took 16.51 seconds to spawn the instance on the hypervisor.
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.770 189385 DEBUG nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.838 189385 INFO nova.compute.manager [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Took 17.09 seconds to build instance.
Nov 25 11:01:25 compute-0 nova_compute[189381]: 2025-11-25 11:01:25.867 189385 DEBUG oslo_concurrency.lockutils [None req-a91efe1d-5c97-4eb8-86e2-c5573642bf22 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.199 189385 DEBUG nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.199 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.200 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.200 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.200 189385 DEBUG nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] No waiting events found dispatching network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.200 189385 WARNING nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received unexpected event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d for instance with vm_state active and task_state None.
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.200 189385 DEBUG nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received event network-changed-2709535c-6a90-41ec-b6cf-556a36171fb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.200 189385 DEBUG nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Refreshing instance network info cache due to event network-changed-2709535c-6a90-41ec-b6cf-556a36171fb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.201 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.201 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.201 189385 DEBUG nova.network.neutron [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Refreshing network info cache for port 2709535c-6a90-41ec-b6cf-556a36171fb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.649 189385 INFO nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Creating config drive at /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk.config
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.656 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_4oi4xh_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.784 189385 DEBUG oslo_concurrency.processutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_4oi4xh_" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
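The config drive is an ISO 9660 image labelled config-2, built from a temporary staging tree (/tmp/tmp_4oi4xh_ above) that holds the openstack/latest metadata files; cloud-init locates it by that volume label. The flags select relaxed filename rules (-ldots, -allow-lowercase, -allow-multidot, -l) plus Joliet (-J) and Rock Ridge (-r) extensions. A sketch replaying the logged command (paths are placeholders; genisoimage is a common mkisofs drop-in):

    # Sketch: rebuild a config-2 ISO with the exact flag set from the log.
    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', 'disk.config',        # output image (instance dir in the log)
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r',
         '-V', 'config-2',           # volume label cloud-init searches for
         '/path/to/staging/tree'],   # holds openstack/latest/meta_data.json etc.
        check=True)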
Nov 25 11:01:26 compute-0 kernel: tap2709535c-6a: entered promiscuous mode
Nov 25 11:01:26 compute-0 NetworkManager[56317]: <info>  [1764068486.8441] manager: (tap2709535c-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.846 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:26 compute-0 ovn_controller[97779]: 2025-11-25T11:01:26Z|00090|binding|INFO|Claiming lport 2709535c-6a90-41ec-b6cf-556a36171fb4 for this chassis.
Nov 25 11:01:26 compute-0 ovn_controller[97779]: 2025-11-25T11:01:26Z|00091|binding|INFO|2709535c-6a90-41ec-b6cf-556a36171fb4: Claiming fa:16:3e:29:e1:3b 10.100.0.7
Nov 25 11:01:26 compute-0 systemd-machined[155706]: New machine qemu-9-instance-00000009.
Nov 25 11:01:26 compute-0 systemd-udevd[253013]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.917 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:26 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 25 11:01:26 compute-0 ovn_controller[97779]: 2025-11-25T11:01:26Z|00092|binding|INFO|Setting lport 2709535c-6a90-41ec-b6cf-556a36171fb4 ovn-installed in OVS
Nov 25 11:01:26 compute-0 NetworkManager[56317]: <info>  [1764068486.9258] device (tap2709535c-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:01:26 compute-0 nova_compute[189381]: 2025-11-25 11:01:26.927 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:26 compute-0 NetworkManager[56317]: <info>  [1764068486.9298] device (tap2709535c-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:01:26 compute-0 ovn_controller[97779]: 2025-11-25T11:01:26Z|00093|binding|INFO|Setting lport 2709535c-6a90-41ec-b6cf-556a36171fb4 up in Southbound
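The ovn-controller sequence above (Claiming lport, ovn-installed in OVS, up in Southbound) is this chassis taking ownership of the port after spotting its iface-id in the local OVS database. The resulting binding can be verified against the Southbound DB; a sketch shelling out to ovn-sbctl (assumes SB access from this node):

    # Sketch: check that the Port_Binding row is bound to a chassis and up.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=2709535c-6a90-41ec-b6cf-556a36171fb4'],
        capture_output=True, text=True, check=True)
    print(out.stdout)    # chassis set and up=[true] once binding completes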
Nov 25 11:01:26 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:26.987 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:e1:3b 10.100.0.7'], port_security=['fa:16:3e:29:e1:3b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '46bfe581-82ad-4ba4-a5f9-4fff7ab4223a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2311348-22a3-40d9-9c9d-8ec92e308dc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04532f8fff61471495a338caf8c9670e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0dfa75b0-9449-4b88-9b30-551d065178ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a8d5b58-1cd8-44e9-a65b-e7c2e424bdb7, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=2709535c-6a90-41ec-b6cf-556a36171fb4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:01:26 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:26.988 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 2709535c-6a90-41ec-b6cf-556a36171fb4 in datapath d2311348-22a3-40d9-9c9d-8ec92e308dc8 bound to our chassis
Nov 25 11:01:26 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:26.990 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2311348-22a3-40d9-9c9d-8ec92e308dc8
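Provisioning metadata for the datapath means the agent creates an ovnmeta-<network-uuid> namespace, a veth pair whose outer end (tapd2311348-20, the Veth device NetworkManager reports below) stays in the root namespace while the inner end (tapd2311348-21) is moved inside, and then starts the metadata proxy in that namespace. A rough sketch of the plumbing with pyroute2, the library behind the privsep netlink replies below; the real agent does this through neutron.privileged wrappers:

    # Rough sketch of the namespace/veth setup; names copied from the log.
    from pyroute2 import IPRoute, netns

    ns_name = 'ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8'
    netns.create(ns_name)                           # ip netns add ...

    ipr = IPRoute()
    ipr.link('add', ifname='tapd2311348-20',        # outer end, root namespace
             kind='veth', peer='tapd2311348-21')    # inner end
    idx = ipr.link_lookup(ifname='tapd2311348-21')[0]
    ipr.link('set', index=idx, net_ns_fd=ns_name)   # move peer into namespace
    ipr.close()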
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.001 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[7457b70f-f6f8-49bd-b8a5-32aa68d27d51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.002 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2311348-21 in ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.004 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2311348-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.004 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[166f1061-c3c6-4799-ab2c-29d059fe0a68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.006 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4a2687-cf1b-4df8-9085-f8411c158e3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.020 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[7b23c9f4-6026-49cb-a4e5-dbc25622bc88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.047 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2f532f88-f2ad-488d-8e8b-da307fe6794d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.079 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[de4efd8b-79ba-4401-9d23-0df677f84d6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.090 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[1790a762-b643-470d-85fb-dc003759861c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 NetworkManager[56317]: <info>  [1764068487.0935] manager: (tapd2311348-20): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.126 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[1e88200c-5fc2-4116-b49b-f5183eef5fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.131 189385 DEBUG nova.compute.manager [req-d2e89744-e95b-4a89-9ad8-5a662936714e req-f2eec2b1-44b7-4eff-a3a4-cddddf492694 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.131 189385 DEBUG oslo_concurrency.lockutils [req-d2e89744-e95b-4a89-9ad8-5a662936714e req-f2eec2b1-44b7-4eff-a3a4-cddddf492694 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.131 189385 DEBUG oslo_concurrency.lockutils [req-d2e89744-e95b-4a89-9ad8-5a662936714e req-f2eec2b1-44b7-4eff-a3a4-cddddf492694 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.132 189385 DEBUG oslo_concurrency.lockutils [req-d2e89744-e95b-4a89-9ad8-5a662936714e req-f2eec2b1-44b7-4eff-a3a4-cddddf492694 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.132 189385 DEBUG nova.compute.manager [req-d2e89744-e95b-4a89-9ad8-5a662936714e req-f2eec2b1-44b7-4eff-a3a4-cddddf492694 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] No waiting events found dispatching network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.132 189385 WARNING nova.compute.manager [req-d2e89744-e95b-4a89-9ad8-5a662936714e req-f2eec2b1-44b7-4eff-a3a4-cddddf492694 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received unexpected event network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa for instance with vm_state active and task_state None.
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.131 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[7bacda17-03c7-4ebc-a1a9-8b2fe1520476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 NetworkManager[56317]: <info>  [1764068487.1553] device (tapd2311348-20): carrier: link connected
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.162 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1f18fd-2210-4523-8eea-f29c255dc689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.181 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[fe599684-ca69-49fd-abad-b69863ecca5b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2311348-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:76:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539473, 'reachable_time': 21371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253048, 'error': None, 'target': 'ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.196 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[737adee8-e86b-4aa8-9d0a-2a45777bc125]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:76ae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 539473, 'tstamp': 539473}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253051, 'error': None, 'target': 'ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.216 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8b51c567-606b-4f81-8224-539000f7b57d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2311348-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:76:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539473, 'reachable_time': 21371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253055, 'error': None, 'target': 'ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.246 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0926a156-93e4-4a32-b27a-b88686ad0053]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
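The privsep replies above are pyroute2 netlink messages relayed back to the unprivileged agent: each RTM_NEWLINK dump carries the tap device's IFLA_* attributes (name, MAC, MTU, operstate, carrier counters) for the ovnmeta- namespace, and the RTM_NEWADDR reply carries its link-local address. A minimal sketch of producing the same link dump directly with pyroute2 (assumes the pyroute2 package, root privileges, and the namespace name copied from the log):

    # Dump link attributes inside the OVN metadata namespace, mirroring
    # the fields visible in the privsep replies above.
    from pyroute2 import NetNS

    NETNS = 'ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8'  # from the log

    with NetNS(NETNS) as ns:
        for msg in ns.get_links():
            print(msg.get_attr('IFLA_IFNAME'),
                  msg.get_attr('IFLA_ADDRESS'),
                  msg.get_attr('IFLA_MTU'),
                  msg.get_attr('IFLA_OPERSTATE'))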
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.308 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068487.3078618, 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.308 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] VM Started (Lifecycle Event)
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.318 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[96d9c023-316f-481f-83b6-621d2a335309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.319 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2311348-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.320 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.320 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2311348-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:27 compute-0 NetworkManager[56317]: <info>  [1764068487.3230] manager: (tapd2311348-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 25 11:01:27 compute-0 kernel: tapd2311348-20: entered promiscuous mode
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.325 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2311348-20, col_values=(('external_ids', {'iface-id': '0db92813-e5bc-4e6f-9d0c-d4beb1180c8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
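The three ovsdbapp commands above move the metadata tap onto br-int (the del_port against br-ex is a no-op cleanup, hence "Transaction caused no change") and stamp the OVS Interface with the Neutron port ID that ovn-controller uses for binding. A sketch of the same transaction issued with ovsdbapp directly; the socket path is the conventional default and an assumption, the values are copied from the log:

    # Replay of the three logged commands with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd2311348-20', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd2311348-20', may_exist=True))
        txn.add(api.db_set('Interface', 'tapd2311348-20',
                           ('external_ids',
                            {'iface-id': '0db92813-e5bc-4e6f-9d0c-d4beb1180c8b'})))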
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.327 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:27 compute-0 ovn_controller[97779]: 2025-11-25T11:01:27Z|00094|binding|INFO|Releasing lport 0db92813-e5bc-4e6f-9d0c-d4beb1180c8b from this chassis (sb_readonly=0)
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.331 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.334 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.342 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2311348-22a3-40d9-9c9d-8ec92e308dc8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2311348-22a3-40d9-9c9d-8ec92e308dc8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.343 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd89224-89b1-45ca-afbf-e91bebb0741e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.345 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-d2311348-22a3-40d9-9c9d-8ec92e308dc8
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/d2311348-22a3-40d9-9c9d-8ec92e308dc8.pid.haproxy
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID d2311348-22a3-40d9-9c9d-8ec92e308dc8
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:01:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:27.345 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8', 'env', 'PROCESS_TAG=haproxy-d2311348-22a3-40d9-9c9d-8ec92e308dc8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2311348-22a3-40d9-9c9d-8ec92e308dc8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
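The rendered configuration makes haproxy listen on 169.254.169.254:80 inside the ovnmeta- namespace and forward every request to the /var/lib/neutron/metadata_proxy unix socket, adding the X-OVN-Network-ID header so the metadata service can resolve which network the request came from. Stripped of the sudo/neutron-rootwrap wrapper, the spawn command logged above reduces to the following sketch (illustrative only; must run as root):

    # The logged haproxy spawn, minus the rootwrap wrapper.
    import subprocess

    NETWORK_ID = 'd2311348-22a3-40d9-9c9d-8ec92e308dc8'
    NS = 'ovnmeta-' + NETWORK_ID
    CONF = f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf'

    subprocess.run(['ip', 'netns', 'exec', NS,
                    'env', f'PROCESS_TAG=haproxy-{NETWORK_ID}',
                    'haproxy', '-f', CONF],
                   check=True)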
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.350 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068487.3079278, 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.350 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] VM Paused (Lifecycle Event)
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.352 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.365 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.370 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:01:27 compute-0 nova_compute[189381]: 2025-11-25 11:01:27.388 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] During sync_power_state the instance has a pending task (spawning). Skip.
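The power-state sync compares small integer states: the DB still says 0 (NOSTATE) while libvirt reports 3 (PAUSED) during spawn, and 1 (RUNNING) after the resume below. The constants mirror nova/compute/power_state.py:

    # Nova power_state values as used in the sync messages above.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7
    STATE_MAP = {NOSTATE: 'pending', RUNNING: 'running', PAUSED: 'paused',
                 SHUTDOWN: 'shutdown', CRASHED: 'crashed',
                 SUSPENDED: 'suspended'}

    # "current DB power_state: 0, VM power_state: 3" from the log:
    print(STATE_MAP[0], '->', STATE_MAP[3])   # pending -> paused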
Nov 25 11:01:27 compute-0 podman[253089]: 2025-11-25 11:01:27.738617847 +0000 UTC m=+0.064738625 container create 9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 25 11:01:27 compute-0 systemd[1]: Started libpod-conmon-9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf.scope.
Nov 25 11:01:27 compute-0 podman[253089]: 2025-11-25 11:01:27.704222261 +0000 UTC m=+0.030343069 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:01:27 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:01:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3bb2b5eb352b7a87cce0c732f487fe6361edd3e9c2c63b2b1ad10efff73284/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:01:27 compute-0 podman[253089]: 2025-11-25 11:01:27.844410539 +0000 UTC m=+0.170531337 container init 9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:01:27 compute-0 podman[253089]: 2025-11-25 11:01:27.85479783 +0000 UTC m=+0.180918608 container start 9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:01:27 compute-0 neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8[253103]: [NOTICE]   (253107) : New worker (253109) forked
Nov 25 11:01:27 compute-0 neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8[253103]: [NOTICE]   (253107) : Loading success.
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.324 189385 DEBUG nova.compute.manager [req-8403247a-7c0c-4ec7-99ed-c375f0617d23 req-eddffc6a-ba2e-4cab-be84-3f510afb67dd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received event network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.324 189385 DEBUG oslo_concurrency.lockutils [req-8403247a-7c0c-4ec7-99ed-c375f0617d23 req-eddffc6a-ba2e-4cab-be84-3f510afb67dd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.324 189385 DEBUG oslo_concurrency.lockutils [req-8403247a-7c0c-4ec7-99ed-c375f0617d23 req-eddffc6a-ba2e-4cab-be84-3f510afb67dd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.325 189385 DEBUG oslo_concurrency.lockutils [req-8403247a-7c0c-4ec7-99ed-c375f0617d23 req-eddffc6a-ba2e-4cab-be84-3f510afb67dd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.325 189385 DEBUG nova.compute.manager [req-8403247a-7c0c-4ec7-99ed-c375f0617d23 req-eddffc6a-ba2e-4cab-be84-3f510afb67dd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Processing event network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.325 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
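The pop_instance_event/wait_for_instance_event pair above is nova's rendezvous between the spawning thread and Neutron's network-vif-plugged notification: the spawner registers an event keyed by instance and event name, blocks, and the external-event handler pops and signals it under the "<uuid>-events" lock. A simplified sketch of that pattern (not nova's actual code; all names are illustrative):

    # Simplified rendezvous between a spawning thread and the
    # external-event handler.
    import threading

    _events = {}              # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()  # plays the role of the "<uuid>-events" lock

    def prepare_event(uuid, name):
        with _lock:
            ev = _events[(uuid, name)] = threading.Event()
        return ev

    def pop_event(uuid, name):
        with _lock:
            ev = _events.pop((uuid, name), None)
        if ev is None:
            return False      # "No waiting events found dispatching ..."
        ev.set()
        return True

    ev = prepare_event('46bfe581-...', 'network-vif-plugged-2709535c-...')
    pop_event('46bfe581-...', 'network-vif-plugged-2709535c-...')
    assert ev.wait(timeout=300)   # analogue of nova's vif_plugging_timeout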
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.329 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068488.32905, 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.329 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] VM Resumed (Lifecycle Event)
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.331 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.335 189385 INFO nova.virt.libvirt.driver [-] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Instance spawned successfully.
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.336 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.356 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.364 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.364 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.365 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.365 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.365 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.366 189385 DEBUG nova.virt.libvirt.driver [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.371 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.401 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.428 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.700 189385 INFO nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Took 13.58 seconds to spawn the instance on the hypervisor.
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.700 189385 DEBUG nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:28 compute-0 nova_compute[189381]: 2025-11-25 11:01:28.921 189385 INFO nova.compute.manager [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Took 14.58 seconds to build instance.
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.151 189385 DEBUG oslo_concurrency.lockutils [None req-72183aad-8053-4215-b4ec-b1bd45b83ee6 dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.286 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.286 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.287 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.287 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.381 189385 DEBUG nova.network.neutron [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Updated VIF entry in instance network info cache for port 2709535c-6a90-41ec-b6cf-556a36171fb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.382 189385 DEBUG nova.network.neutron [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Updating instance_info_cache with network_info: [{"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.398 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
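The instance_info_cache entry logged above is a JSON list of VIF records. A sketch of pulling the useful bits (device name, MAC, bridge, fixed IPs) out of a record shaped like the logged one; the literal below is a trimmed copy of it:

    # Parse a trimmed copy of the logged cache record.
    import json

    cached_blob = '''[{"id": "2709535c-6a90-41ec-b6cf-556a36171fb4",
        "address": "fa:16:3e:29:e1:3b", "devname": "tap2709535c-6a",
        "details": {"bridge_name": "br-int"},
        "network": {"subnets": [{"ips": [{"address": "10.100.0.7"}]}]}}]'''

    for vif in json.loads(cached_blob):
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'],
              vif['details']['bridge_name'], ips)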
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.398 189385 DEBUG nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Received event network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.398 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.398 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.399 189385 DEBUG oslo_concurrency.lockutils [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "7a2ec38f-d9cc-45cf-8338-fe982e25d7e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.399 189385 DEBUG nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] No waiting events found dispatching network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.399 189385 WARNING nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Received unexpected event network-vif-plugged-4b99e8ff-a6c5-4046-9654-a09c32b9646b for instance with vm_state deleted and task_state None.
Nov 25 11:01:29 compute-0 nova_compute[189381]: 2025-11-25 11:01:29.399 189385 DEBUG nova.compute.manager [req-b870e7d3-4981-4c54-b628-fa36c7b628c1 req-d1845102-3ae4-4c16-968c-37159f3da24f d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Received event network-vif-deleted-4b99e8ff-a6c5-4046-9654-a09c32b9646b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:29 compute-0 podman[203557]: time="2025-11-25T11:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:01:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Nov 25 11:01:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5719 "" "Go-http-client/1.1"
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.342 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:30 compute-0 NetworkManager[56317]: <info>  [1764068490.3438] manager: (patch-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 25 11:01:30 compute-0 NetworkManager[56317]: <info>  [1764068490.3472] manager: (patch-br-int-to-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.414 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.450 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:30 compute-0 ovn_controller[97779]: 2025-11-25T11:01:30Z|00095|binding|INFO|Releasing lport 0db92813-e5bc-4e6f-9d0c-d4beb1180c8b from this chassis (sb_readonly=0)
Nov 25 11:01:30 compute-0 ovn_controller[97779]: 2025-11-25T11:01:30Z|00096|binding|INFO|Releasing lport 0d385036-42e8-4835-9d5d-981ad129264d from this chassis (sb_readonly=0)
Nov 25 11:01:30 compute-0 ovn_controller[97779]: 2025-11-25T11:01:30Z|00097|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.478 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.490 189385 DEBUG nova.compute.manager [req-05e4ec88-6df3-4b14-8200-2be37b82bce5 req-182ec754-4d2a-4e2c-8e84-4951562219e2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received event network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.491 189385 DEBUG oslo_concurrency.lockutils [req-05e4ec88-6df3-4b14-8200-2be37b82bce5 req-182ec754-4d2a-4e2c-8e84-4951562219e2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.491 189385 DEBUG oslo_concurrency.lockutils [req-05e4ec88-6df3-4b14-8200-2be37b82bce5 req-182ec754-4d2a-4e2c-8e84-4951562219e2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.492 189385 DEBUG oslo_concurrency.lockutils [req-05e4ec88-6df3-4b14-8200-2be37b82bce5 req-182ec754-4d2a-4e2c-8e84-4951562219e2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.492 189385 DEBUG nova.compute.manager [req-05e4ec88-6df3-4b14-8200-2be37b82bce5 req-182ec754-4d2a-4e2c-8e84-4951562219e2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] No waiting events found dispatching network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:01:30 compute-0 nova_compute[189381]: 2025-11-25 11:01:30.493 189385 WARNING nova.compute.manager [req-05e4ec88-6df3-4b14-8200-2be37b82bce5 req-182ec754-4d2a-4e2c-8e84-4951562219e2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received unexpected event network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 for instance with vm_state active and task_state None.
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.116 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updating instance_info_cache with network_info: [{"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.247 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.248 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.249 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.249 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.268 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.269 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.270 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
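The acquiring/acquired/released triplets come from oslo.concurrency's lockutils, which nova uses to serialize resource-tracker work on the named "compute_resources" lock. The pattern, sketched:

    # The named-lock pattern behind the acquired/released lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        ...   # body runs while holding the named lock

    clean_compute_node_cache()

    # equivalent inline form:
    with lockutils.lock('compute_resources'):
        pass  # critical section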
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.270 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.360 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:31 compute-0 openstack_network_exporter[205722]: ERROR   11:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:01:31 compute-0 openstack_network_exporter[205722]: ERROR   11:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:01:31 compute-0 openstack_network_exporter[205722]: ERROR   11:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:01:31 compute-0 openstack_network_exporter[205722]: ERROR   11:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:01:31 compute-0 openstack_network_exporter[205722]: ERROR   11:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
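These exporter errors mean no daemon control sockets were found where it looked: ovs-vswitchd, ovsdb-server, and ovn-northd each expose a <name>.<pid>.ctl socket in their run directory, and a compute node runs no ovn-northd at all. A sketch of checking the conventional locations; the paths are typical defaults and an assumption, since containerized deployments may relocate them:

    # Probe the conventional control-socket locations.
    import glob

    for pattern in ('/var/run/openvswitch/ovs-vswitchd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'missing')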
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.438 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.439 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.507 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.513 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.570 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.571 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.627 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.634 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.696 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.698 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:01:31 compute-0 nova_compute[189381]: 2025-11-25 11:01:31.758 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
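Each qemu-img probe above is wrapped in oslo.concurrency's prlimit helper, capping the child at 1 GiB of address space and 30 s of CPU exactly as the logged command line shows, with --force-share so the probe does not contend for the image lock held by the running guest. The same bounded call, sketched with processutils:

    # The bounded qemu-img probe from the log, via oslo.concurrency.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,   # --as=1073741824 (1 GiB)
        cpu_time=30)                # --cpu=30 (seconds)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a/disk',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)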
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.180 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.182 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4988MB free_disk=72.16136932373047GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.183 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.184 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.258 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance c4d7af36-620f-46df-8347-4eaeed7856c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.259 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 388d7cfb-c9e5-413a-9649-93e137294b38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.260 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.260 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.261 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.351 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.365 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.388 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.389 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.816 189385 DEBUG nova.compute.manager [req-1fde1f51-4c27-416f-a9f4-5aba433afffe req-1e1092f6-0081-4812-a88e-31a1731fd284 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-changed-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.817 189385 DEBUG nova.compute.manager [req-1fde1f51-4c27-416f-a9f4-5aba433afffe req-1e1092f6-0081-4812-a88e-31a1731fd284 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Refreshing instance network info cache due to event network-changed-c0d318cc-f546-4bbc-aebc-f0c185dff8aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.818 189385 DEBUG oslo_concurrency.lockutils [req-1fde1f51-4c27-416f-a9f4-5aba433afffe req-1e1092f6-0081-4812-a88e-31a1731fd284 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.819 189385 DEBUG oslo_concurrency.lockutils [req-1fde1f51-4c27-416f-a9f4-5aba433afffe req-1e1092f6-0081-4812-a88e-31a1731fd284 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:32 compute-0 nova_compute[189381]: 2025-11-25 11:01:32.819 189385 DEBUG nova.network.neutron [req-1fde1f51-4c27-416f-a9f4-5aba433afffe req-1e1092f6-0081-4812-a88e-31a1731fd284 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Refreshing network info cache for port c0d318cc-f546-4bbc-aebc-f0c185dff8aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:01:33 compute-0 nova_compute[189381]: 2025-11-25 11:01:33.403 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:34 compute-0 ovn_controller[97779]: 2025-11-25T11:01:34Z|00098|binding|INFO|Releasing lport 0db92813-e5bc-4e6f-9d0c-d4beb1180c8b from this chassis (sb_readonly=0)
Nov 25 11:01:34 compute-0 ovn_controller[97779]: 2025-11-25T11:01:34Z|00099|binding|INFO|Releasing lport 0d385036-42e8-4835-9d5d-981ad129264d from this chassis (sb_readonly=0)
Nov 25 11:01:34 compute-0 ovn_controller[97779]: 2025-11-25T11:01:34Z|00100|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:01:34 compute-0 nova_compute[189381]: 2025-11-25 11:01:34.334 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:34 compute-0 podman[253139]: 2025-11-25 11:01:34.95586534 +0000 UTC m=+0.070524472 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:01:35 compute-0 podman[253138]: 2025-11-25 11:01:34.998985098 +0000 UTC m=+0.113367292 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.133 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.134 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.135 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.135 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.136 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.137 189385 INFO nova.compute.manager [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Terminating instance
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.139 189385 DEBUG nova.compute.manager [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:01:35 compute-0 kernel: tap2709535c-6a (unregistering): left promiscuous mode
Nov 25 11:01:35 compute-0 NetworkManager[56317]: <info>  [1764068495.1654] device (tap2709535c-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:01:35 compute-0 ovn_controller[97779]: 2025-11-25T11:01:35Z|00101|binding|INFO|Releasing lport 2709535c-6a90-41ec-b6cf-556a36171fb4 from this chassis (sb_readonly=0)
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.182 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:35 compute-0 ovn_controller[97779]: 2025-11-25T11:01:35Z|00102|binding|INFO|Setting lport 2709535c-6a90-41ec-b6cf-556a36171fb4 down in Southbound
Nov 25 11:01:35 compute-0 ovn_controller[97779]: 2025-11-25T11:01:35Z|00103|binding|INFO|Removing iface tap2709535c-6a ovn-installed in OVS
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.196 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:e1:3b 10.100.0.7'], port_security=['fa:16:3e:29:e1:3b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '46bfe581-82ad-4ba4-a5f9-4fff7ab4223a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2311348-22a3-40d9-9c9d-8ec92e308dc8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04532f8fff61471495a338caf8c9670e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0dfa75b0-9449-4b88-9b30-551d065178ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.178'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a8d5b58-1cd8-44e9-a65b-e7c2e424bdb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=2709535c-6a90-41ec-b6cf-556a36171fb4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.197 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 2709535c-6a90-41ec-b6cf-556a36171fb4 in datapath d2311348-22a3-40d9-9c9d-8ec92e308dc8 unbound from our chassis
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.205 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2311348-22a3-40d9-9c9d-8ec92e308dc8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.210 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[3375cf06-59b0-471f-adf3-6327b06be531]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.211 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8 namespace which is not needed anymore
Nov 25 11:01:35 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 25 11:01:35 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 7.337s CPU time.
Nov 25 11:01:35 compute-0 systemd-machined[155706]: Machine qemu-9-instance-00000009 terminated.
Nov 25 11:01:35 compute-0 ovn_controller[97779]: 2025-11-25T11:01:35Z|00104|binding|INFO|Releasing lport 0db92813-e5bc-4e6f-9d0c-d4beb1180c8b from this chassis (sb_readonly=0)
Nov 25 11:01:35 compute-0 ovn_controller[97779]: 2025-11-25T11:01:35Z|00105|binding|INFO|Releasing lport 0d385036-42e8-4835-9d5d-981ad129264d from this chassis (sb_readonly=0)
Nov 25 11:01:35 compute-0 ovn_controller[97779]: 2025-11-25T11:01:35Z|00106|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.205 189385 DEBUG nova.compute.manager [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-changed-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.336 189385 DEBUG nova.compute.manager [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Refreshing instance network info cache due to event network-changed-5a6cf231-3edc-4338-bb8e-74f0f7e6672d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.337 189385 DEBUG oslo_concurrency.lockutils [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.337 189385 DEBUG oslo_concurrency.lockutils [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.337 189385 DEBUG nova.network.neutron [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Refreshing network info cache for port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.339 189385 DEBUG nova.network.neutron [req-1fde1f51-4c27-416f-a9f4-5aba433afffe req-1e1092f6-0081-4812-a88e-31a1731fd284 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updated VIF entry in instance network info cache for port c0d318cc-f546-4bbc-aebc-f0c185dff8aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.340 189385 DEBUG nova.network.neutron [req-1fde1f51-4c27-416f-a9f4-5aba433afffe req-1e1092f6-0081-4812-a88e-31a1731fd284 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.341 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.359 189385 DEBUG oslo_concurrency.lockutils [req-1fde1f51-4c27-416f-a9f4-5aba433afffe req-1e1092f6-0081-4812-a88e-31a1731fd284 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:35 compute-0 neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8[253103]: [NOTICE]   (253107) : haproxy version is 2.8.14-c23fe91
Nov 25 11:01:35 compute-0 neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8[253103]: [NOTICE]   (253107) : path to executable is /usr/sbin/haproxy
Nov 25 11:01:35 compute-0 neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8[253103]: [WARNING]  (253107) : Exiting Master process...
Nov 25 11:01:35 compute-0 neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8[253103]: [ALERT]    (253107) : Current worker (253109) exited with code 143 (Terminated)
Nov 25 11:01:35 compute-0 neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8[253103]: [WARNING]  (253107) : All workers exited. Exiting... (0)
Nov 25 11:01:35 compute-0 systemd[1]: libpod-9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf.scope: Deactivated successfully.
Nov 25 11:01:35 compute-0 conmon[253103]: conmon 9bbf438f60c4693438bb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf.scope/container/memory.events
Nov 25 11:01:35 compute-0 podman[253200]: 2025-11-25 11:01:35.382681084 +0000 UTC m=+0.062792038 container died 9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.408 189385 INFO nova.virt.libvirt.driver [-] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Instance destroyed successfully.
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.409 189385 DEBUG nova.objects.instance [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lazy-loading 'resources' on Instance uuid 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.415 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf-userdata-shm.mount: Deactivated successfully.
Nov 25 11:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-be3bb2b5eb352b7a87cce0c732f487fe6361edd3e9c2c63b2b1ad10efff73284-merged.mount: Deactivated successfully.
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.423 189385 DEBUG nova.virt.libvirt.vif [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:01:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-125293488',display_name='tempest-ServersTestJSON-server-125293488',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-125293488',id=9,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGITl5lRnX7slRfvHP/ELhw0LLrPRE0x1kdo8T5bb/17XUkB4sDlG3WkRA5AXjM/WNki8O2IF21t86HfzWDRbLiNGFj4HFYAo5Qj0GcQSI/wzBcPi8+QjYSvJwoJw0Ypsg==',key_name='tempest-keypair-774350768',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:01:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04532f8fff61471495a338caf8c9670e',ramdisk_id='',reservation_id='r-rn4qfay3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1677364154',owner_user_name='tempest-ServersTestJSON-1677364154-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:01:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dcfeee3b6d344d059499b78710287a87',uuid=46bfe581-82ad-4ba4-a5f9-4fff7ab4223a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.430 189385 DEBUG nova.network.os_vif_util [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Converting VIF {"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.431 189385 DEBUG nova.network.os_vif_util [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:e1:3b,bridge_name='br-int',has_traffic_filtering=True,id=2709535c-6a90-41ec-b6cf-556a36171fb4,network=Network(d2311348-22a3-40d9-9c9d-8ec92e308dc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2709535c-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.432 189385 DEBUG os_vif [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:e1:3b,bridge_name='br-int',has_traffic_filtering=True,id=2709535c-6a90-41ec-b6cf-556a36171fb4,network=Network(d2311348-22a3-40d9-9c9d-8ec92e308dc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2709535c-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.434 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.434 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2709535c-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.436 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:35 compute-0 podman[253200]: 2025-11-25 11:01:35.43918659 +0000 UTC m=+0.119297534 container cleanup 9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.439 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.442 189385 INFO os_vif [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:e1:3b,bridge_name='br-int',has_traffic_filtering=True,id=2709535c-6a90-41ec-b6cf-556a36171fb4,network=Network(d2311348-22a3-40d9-9c9d-8ec92e308dc8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2709535c-6a')
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.442 189385 INFO nova.virt.libvirt.driver [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Deleting instance files /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a_del
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.443 189385 INFO nova.virt.libvirt.driver [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Deletion of /var/lib/nova/instances/46bfe581-82ad-4ba4-a5f9-4fff7ab4223a_del complete
Nov 25 11:01:35 compute-0 systemd[1]: libpod-conmon-9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf.scope: Deactivated successfully.
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.505 189385 INFO nova.compute.manager [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Took 0.37 seconds to destroy the instance on the hypervisor.
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.506 189385 DEBUG oslo.service.loopingcall [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.506 189385 DEBUG nova.compute.manager [-] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.506 189385 DEBUG nova.network.neutron [-] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:01:35 compute-0 podman[253243]: 2025-11-25 11:01:35.546817315 +0000 UTC m=+0.083722284 container remove 9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.557 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[c094d30e-95cd-4bdc-a002-c8814521d2cf]: (4, ('Tue Nov 25 11:01:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8 (9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf)\n9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf\nTue Nov 25 11:01:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8 (9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf)\n9bbf438f60c4693438bb2fd0fba006b87ede0f26e0410a74bd4a41b3d20800cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.559 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb1998f-c9ed-43f4-83b1-3d44c916b282]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.560 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2311348-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.563 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:35 compute-0 kernel: tapd2311348-20: left promiscuous mode
Nov 25 11:01:35 compute-0 nova_compute[189381]: 2025-11-25 11:01:35.584 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.586 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbbf891-3331-41c0-a9c9-f25505142955]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.607 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[6f70df5a-9dc9-4339-8266-a9fbe2bee423]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.608 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[11f47008-530e-4011-b01f-6eb10701e6d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.623 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cd8b12-b4e7-4bf9-b9f8-f1b7f8667a53]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539465, 'reachable_time': 43753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253255, 'error': None, 'target': 'ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:35 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2311348\x2d22a3\x2d40d9\x2d9c9d\x2d8ec92e308dc8.mount: Deactivated successfully.
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.629 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2311348-22a3-40d9-9c9d-8ec92e308dc8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:01:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:35.629 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[19496afd-19eb-4b4f-85d8-8bf416c12abd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:01:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:36.066 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:36.066 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:01:36.068 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.161 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.162 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.586 189385 DEBUG nova.network.neutron [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updated VIF entry in instance network info cache for port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.587 189385 DEBUG nova.network.neutron [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updating instance_info_cache with network_info: [{"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.606 189385 DEBUG oslo_concurrency.lockutils [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.606 189385 DEBUG nova.compute.manager [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received event network-changed-2709535c-6a90-41ec-b6cf-556a36171fb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.606 189385 DEBUG nova.compute.manager [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Refreshing instance network info cache due to event network-changed-2709535c-6a90-41ec-b6cf-556a36171fb4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.606 189385 DEBUG oslo_concurrency.lockutils [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.607 189385 DEBUG oslo_concurrency.lockutils [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.607 189385 DEBUG nova.network.neutron [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Refreshing network info cache for port 2709535c-6a90-41ec-b6cf-556a36171fb4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.733 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068481.7323947, 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.734 189385 INFO nova.compute.manager [-] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] VM Stopped (Lifecycle Event)
Nov 25 11:01:36 compute-0 nova_compute[189381]: 2025-11-25 11:01:36.752 189385 DEBUG nova.compute.manager [None req-42e2354b-6791-460e-967f-c22008e89abd - - - - - -] [instance: 7a2ec38f-d9cc-45cf-8338-fe982e25d7e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.438 189385 DEBUG nova.compute.manager [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received event network-vif-unplugged-2709535c-6a90-41ec-b6cf-556a36171fb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.439 189385 DEBUG oslo_concurrency.lockutils [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.439 189385 DEBUG oslo_concurrency.lockutils [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.439 189385 DEBUG oslo_concurrency.lockutils [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.439 189385 DEBUG nova.compute.manager [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] No waiting events found dispatching network-vif-unplugged-2709535c-6a90-41ec-b6cf-556a36171fb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.439 189385 DEBUG nova.compute.manager [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received event network-vif-unplugged-2709535c-6a90-41ec-b6cf-556a36171fb4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.440 189385 DEBUG nova.compute.manager [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received event network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.440 189385 DEBUG oslo_concurrency.lockutils [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.440 189385 DEBUG oslo_concurrency.lockutils [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.440 189385 DEBUG oslo_concurrency.lockutils [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.440 189385 DEBUG nova.compute.manager [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] No waiting events found dispatching network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:01:37 compute-0 nova_compute[189381]: 2025-11-25 11:01:37.441 189385 WARNING nova.compute.manager [req-ba8a871c-bd21-43e6-ba7b-d6e53b6fbd77 req-ee2eb653-ac8d-4f33-99d3-6ad23a5fc2f0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received unexpected event network-vif-plugged-2709535c-6a90-41ec-b6cf-556a36171fb4 for instance with vm_state active and task_state deleting.
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.173 189385 DEBUG nova.network.neutron [-] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.230 189385 INFO nova.compute.manager [-] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Took 2.72 seconds to deallocate network for instance.
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.340 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.340 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.405 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.448 189385 DEBUG nova.compute.provider_tree [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.463 189385 DEBUG nova.scheduler.client.report [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.491 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.523 189385 INFO nova.scheduler.client.report [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Deleted allocations for instance 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.612 189385 DEBUG oslo_concurrency.lockutils [None req-37fcea71-8ddc-4bb8-8999-18691f8fd16c dcfeee3b6d344d059499b78710287a87 04532f8fff61471495a338caf8c9670e - - default default] Lock "46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.925 189385 DEBUG nova.network.neutron [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Updated VIF entry in instance network info cache for port 2709535c-6a90-41ec-b6cf-556a36171fb4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.926 189385 DEBUG nova.network.neutron [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Updating instance_info_cache with network_info: [{"id": "2709535c-6a90-41ec-b6cf-556a36171fb4", "address": "fa:16:3e:29:e1:3b", "network": {"id": "d2311348-22a3-40d9-9c9d-8ec92e308dc8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1562872099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04532f8fff61471495a338caf8c9670e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2709535c-6a", "ovs_interfaceid": "2709535c-6a90-41ec-b6cf-556a36171fb4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:01:38 compute-0 nova_compute[189381]: 2025-11-25 11:01:38.950 189385 DEBUG oslo_concurrency.lockutils [req-57a38a4e-3806-4680-9b6f-852666ed9cbd req-531b128b-5c43-41e0-95ea-f440cf97c222 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-46bfe581-82ad-4ba4-a5f9-4fff7ab4223a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:01:38 compute-0 podman[253256]: 2025-11-25 11:01:38.976342543 +0000 UTC m=+0.076401173 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc.)
Nov 25 11:01:39 compute-0 nova_compute[189381]: 2025-11-25 11:01:39.690 189385 DEBUG nova.compute.manager [req-5df909b8-70d3-4f1c-a507-42725b0a3b74 req-5f0314e8-20c6-43b5-addb-c97d75d3f234 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Received event network-vif-deleted-2709535c-6a90-41ec-b6cf-556a36171fb4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:01:40 compute-0 nova_compute[189381]: 2025-11-25 11:01:40.440 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:43 compute-0 nova_compute[189381]: 2025-11-25 11:01:43.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:01:43 compute-0 nova_compute[189381]: 2025-11-25 11:01:43.410 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:44 compute-0 podman[253276]: 2025-11-25 11:01:44.77516606 +0000 UTC m=+0.098758090 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:01:45 compute-0 nova_compute[189381]: 2025-11-25 11:01:45.443 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:47 compute-0 podman[253296]: 2025-11-25 11:01:47.952031924 +0000 UTC m=+0.062731977 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 11:01:47 compute-0 podman[253295]: 2025-11-25 11:01:47.977161431 +0000 UTC m=+0.086924487 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Nov 25 11:01:48 compute-0 nova_compute[189381]: 2025-11-25 11:01:48.413 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:48 compute-0 ovn_controller[97779]: 2025-11-25T11:01:48Z|00107|binding|INFO|Releasing lport 0d385036-42e8-4835-9d5d-981ad129264d from this chassis (sb_readonly=0)
Nov 25 11:01:48 compute-0 ovn_controller[97779]: 2025-11-25T11:01:48Z|00108|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:01:48 compute-0 nova_compute[189381]: 2025-11-25 11:01:48.653 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:50 compute-0 nova_compute[189381]: 2025-11-25 11:01:50.402 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068495.4004323, 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:01:50 compute-0 nova_compute[189381]: 2025-11-25 11:01:50.403 189385 INFO nova.compute.manager [-] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] VM Stopped (Lifecycle Event)
Nov 25 11:01:50 compute-0 nova_compute[189381]: 2025-11-25 11:01:50.422 189385 DEBUG nova.compute.manager [None req-b53ef5be-b9b6-4fbc-81da-e86849656309 - - - - - -] [instance: 46bfe581-82ad-4ba4-a5f9-4fff7ab4223a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:01:50 compute-0 nova_compute[189381]: 2025-11-25 11:01:50.447 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:51 compute-0 podman[253336]: 2025-11-25 11:01:51.9778288 +0000 UTC m=+0.090622944 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 25 11:01:51 compute-0 podman[253335]: 2025-11-25 11:01:51.99717841 +0000 UTC m=+0.110830119 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:01:53 compute-0 nova_compute[189381]: 2025-11-25 11:01:53.415 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:55 compute-0 nova_compute[189381]: 2025-11-25 11:01:55.450 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:55 compute-0 podman[253377]: 2025-11-25 11:01:55.972600218 +0000 UTC m=+0.074365353 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:01:56 compute-0 ovn_controller[97779]: 2025-11-25T11:01:56Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:ff:2a 10.100.0.6
Nov 25 11:01:56 compute-0 ovn_controller[97779]: 2025-11-25T11:01:56Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:ff:2a 10.100.0.6
Nov 25 11:01:58 compute-0 nova_compute[189381]: 2025-11-25 11:01:58.418 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:01:59 compute-0 ovn_controller[97779]: 2025-11-25T11:01:59Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:92:e1:52 10.100.0.14
Nov 25 11:01:59 compute-0 ovn_controller[97779]: 2025-11-25T11:01:59Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:92:e1:52 10.100.0.14
Nov 25 11:01:59 compute-0 podman[203557]: time="2025-11-25T11:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:01:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Nov 25 11:01:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5261 "" "Go-http-client/1.1"
Nov 25 11:02:00 compute-0 nova_compute[189381]: 2025-11-25 11:02:00.453 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:01 compute-0 openstack_network_exporter[205722]: ERROR   11:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:02:01 compute-0 openstack_network_exporter[205722]: ERROR   11:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:02:01 compute-0 openstack_network_exporter[205722]: ERROR   11:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:02:01 compute-0 openstack_network_exporter[205722]: ERROR   11:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:02:01 compute-0 openstack_network_exporter[205722]: ERROR   11:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:02:03 compute-0 nova_compute[189381]: 2025-11-25 11:02:03.421 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:05 compute-0 nova_compute[189381]: 2025-11-25 11:02:05.458 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:05 compute-0 podman[253419]: 2025-11-25 11:02:05.98004855 +0000 UTC m=+0.081048627 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:02:05 compute-0 podman[253418]: 2025-11-25 11:02:05.997279069 +0000 UTC m=+0.096906196 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm)
Nov 25 11:02:08 compute-0 nova_compute[189381]: 2025-11-25 11:02:08.423 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:09 compute-0 podman[253456]: 2025-11-25 11:02:09.94657863 +0000 UTC m=+0.061712008 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, name=ubi9, architecture=x86_64, container_name=kepler, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git)
Nov 25 11:02:10 compute-0 nova_compute[189381]: 2025-11-25 11:02:10.464 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:13 compute-0 nova_compute[189381]: 2025-11-25 11:02:13.426 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:14 compute-0 podman[253476]: 2025-11-25 11:02:14.960345034 +0000 UTC m=+0.076836565 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 25 11:02:15 compute-0 nova_compute[189381]: 2025-11-25 11:02:15.117 189385 DEBUG nova.objects.instance [None req-d31d21d0-e89a-4db1-86c1-57988868c0f0 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lazy-loading 'flavor' on Instance uuid 388d7cfb-c9e5-413a-9649-93e137294b38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:02:15 compute-0 nova_compute[189381]: 2025-11-25 11:02:15.160 189385 DEBUG oslo_concurrency.lockutils [None req-d31d21d0-e89a-4db1-86c1-57988868c0f0 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:02:15 compute-0 nova_compute[189381]: 2025-11-25 11:02:15.160 189385 DEBUG oslo_concurrency.lockutils [None req-d31d21d0-e89a-4db1-86c1-57988868c0f0 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquired lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:02:15 compute-0 nova_compute[189381]: 2025-11-25 11:02:15.466 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:17 compute-0 nova_compute[189381]: 2025-11-25 11:02:17.761 189385 DEBUG nova.network.neutron [None req-d31d21d0-e89a-4db1-86c1-57988868c0f0 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:02:18 compute-0 nova_compute[189381]: 2025-11-25 11:02:18.116 189385 DEBUG nova.compute.manager [req-e5510b59-18ab-4646-bd29-72f9fe88cab8 req-8a797f80-edf5-4562-b041-b6e6ae967ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-changed-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:18 compute-0 nova_compute[189381]: 2025-11-25 11:02:18.116 189385 DEBUG nova.compute.manager [req-e5510b59-18ab-4646-bd29-72f9fe88cab8 req-8a797f80-edf5-4562-b041-b6e6ae967ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Refreshing instance network info cache due to event network-changed-c0d318cc-f546-4bbc-aebc-f0c185dff8aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:02:18 compute-0 nova_compute[189381]: 2025-11-25 11:02:18.117 189385 DEBUG oslo_concurrency.lockutils [req-e5510b59-18ab-4646-bd29-72f9fe88cab8 req-8a797f80-edf5-4562-b041-b6e6ae967ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:02:18 compute-0 nova_compute[189381]: 2025-11-25 11:02:18.428 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:18 compute-0 podman[253495]: 2025-11-25 11:02:18.953854496 +0000 UTC m=+0.058354610 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:02:18 compute-0 podman[253494]: 2025-11-25 11:02:18.95987685 +0000 UTC m=+0.067780073 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 25 11:02:20 compute-0 nova_compute[189381]: 2025-11-25 11:02:20.129 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:20 compute-0 nova_compute[189381]: 2025-11-25 11:02:20.129 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:20 compute-0 nova_compute[189381]: 2025-11-25 11:02:20.130 189385 INFO nova.compute.manager [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Rebooting instance
Nov 25 11:02:20 compute-0 nova_compute[189381]: 2025-11-25 11:02:20.145 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:02:20 compute-0 nova_compute[189381]: 2025-11-25 11:02:20.146 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquired lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:02:20 compute-0 nova_compute[189381]: 2025-11-25 11:02:20.146 189385 DEBUG nova.network.neutron [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:02:20 compute-0 nova_compute[189381]: 2025-11-25 11:02:20.470 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:22 compute-0 podman[253538]: 2025-11-25 11:02:22.975646626 +0000 UTC m=+0.083345223 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 11:02:23 compute-0 podman[253537]: 2025-11-25 11:02:23.003951625 +0000 UTC m=+0.116645577 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:02:23 compute-0 nova_compute[189381]: 2025-11-25 11:02:23.430 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:25 compute-0 nova_compute[189381]: 2025-11-25 11:02:25.474 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:26 compute-0 nova_compute[189381]: 2025-11-25 11:02:26.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:26 compute-0 nova_compute[189381]: 2025-11-25 11:02:26.115 189385 DEBUG nova.network.neutron [None req-d31d21d0-e89a-4db1-86c1-57988868c0f0 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:02:26 compute-0 nova_compute[189381]: 2025-11-25 11:02:26.247 189385 DEBUG oslo_concurrency.lockutils [None req-d31d21d0-e89a-4db1-86c1-57988868c0f0 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Releasing lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:02:26 compute-0 nova_compute[189381]: 2025-11-25 11:02:26.248 189385 DEBUG nova.compute.manager [None req-d31d21d0-e89a-4db1-86c1-57988868c0f0 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 25 11:02:26 compute-0 nova_compute[189381]: 2025-11-25 11:02:26.248 189385 DEBUG nova.compute.manager [None req-d31d21d0-e89a-4db1-86c1-57988868c0f0 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] network_info to inject: |[{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 25 11:02:26 compute-0 nova_compute[189381]: 2025-11-25 11:02:26.251 189385 DEBUG oslo_concurrency.lockutils [req-e5510b59-18ab-4646-bd29-72f9fe88cab8 req-8a797f80-edf5-4562-b041-b6e6ae967ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:02:26 compute-0 nova_compute[189381]: 2025-11-25 11:02:26.252 189385 DEBUG nova.network.neutron [req-e5510b59-18ab-4646-bd29-72f9fe88cab8 req-8a797f80-edf5-4562-b041-b6e6ae967ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Refreshing network info cache for port c0d318cc-f546-4bbc-aebc-f0c185dff8aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:02:26 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:26.687 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:02:26 compute-0 nova_compute[189381]: 2025-11-25 11:02:26.689 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:26 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:26.689 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:02:26 compute-0 podman[253581]: 2025-11-25 11:02:26.941948828 +0000 UTC m=+0.057016781 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:02:27 compute-0 nova_compute[189381]: 2025-11-25 11:02:27.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:27 compute-0 nova_compute[189381]: 2025-11-25 11:02:27.113 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:27.113 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:02:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:27.114 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:02:27 compute-0 nova_compute[189381]: 2025-11-25 11:02:27.589 189385 DEBUG nova.network.neutron [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updating instance_info_cache with network_info: [{"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:02:27 compute-0 nova_compute[189381]: 2025-11-25 11:02:27.611 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Releasing lock "refresh_cache-c4d7af36-620f-46df-8347-4eaeed7856c6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:02:27 compute-0 nova_compute[189381]: 2025-11-25 11:02:27.613 189385 DEBUG nova.compute.manager [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:02:27 compute-0 kernel: tap5a6cf231-3e (unregistering): left promiscuous mode
Nov 25 11:02:27 compute-0 NetworkManager[56317]: <info>  [1764068547.8919] device (tap5a6cf231-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:02:27 compute-0 nova_compute[189381]: 2025-11-25 11:02:27.905 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:27 compute-0 ovn_controller[97779]: 2025-11-25T11:02:27Z|00109|binding|INFO|Releasing lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d from this chassis (sb_readonly=0)
Nov 25 11:02:27 compute-0 ovn_controller[97779]: 2025-11-25T11:02:27Z|00110|binding|INFO|Setting lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d down in Southbound
Nov 25 11:02:27 compute-0 ovn_controller[97779]: 2025-11-25T11:02:27Z|00111|binding|INFO|Removing iface tap5a6cf231-3e ovn-installed in OVS
Nov 25 11:02:27 compute-0 nova_compute[189381]: 2025-11-25 11:02:27.907 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:27.935 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:2a 10.100.0.6'], port_security=['fa:16:3e:82:ff:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4d7af36-620f-46df-8347-4eaeed7856c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '826c484414ce4e89a03cf37f2359f956', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f94f5308-9585-46c9-858a-5bfd8b44a26c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5e6d622-8d17-4306-9b9d-6c16ad078515, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=5a6cf231-3edc-4338-bb8e-74f0f7e6672d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:02:27 compute-0 nova_compute[189381]: 2025-11-25 11:02:27.937 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:27.936 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d in datapath 23ecff9c-5f66-4ace-9c23-23cc4a7533de unbound from our chassis
Nov 25 11:02:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:27.938 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23ecff9c-5f66-4ace-9c23-23cc4a7533de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:02:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:27.939 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8105f4b9-3cf6-4bd4-a874-64a980b6bc54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:27 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:27.939 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de namespace which is not needed anymore
Nov 25 11:02:27 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 25 11:02:27 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 37.151s CPU time.
Nov 25 11:02:27 compute-0 systemd-machined[155706]: Machine qemu-7-instance-00000007 terminated.
Nov 25 11:02:28 compute-0 kernel: tap5a6cf231-3e: entered promiscuous mode
Nov 25 11:02:28 compute-0 systemd-udevd[253606]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00112|binding|INFO|Claiming lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d for this chassis.
Nov 25 11:02:28 compute-0 kernel: tap5a6cf231-3e (unregistering): left promiscuous mode
Nov 25 11:02:28 compute-0 NetworkManager[56317]: <info>  [1764068548.0887] manager: (tap5a6cf231-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00113|binding|INFO|5a6cf231-3edc-4338-bb8e-74f0f7e6672d: Claiming fa:16:3e:82:ff:2a 10.100.0.6
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.089 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[252783]: [NOTICE]   (252787) : haproxy version is 2.8.14-c23fe91
Nov 25 11:02:28 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[252783]: [NOTICE]   (252787) : path to executable is /usr/sbin/haproxy
Nov 25 11:02:28 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[252783]: [WARNING]  (252787) : Exiting Master process...
Nov 25 11:02:28 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[252783]: [ALERT]    (252787) : Current worker (252789) exited with code 143 (Terminated)
Nov 25 11:02:28 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[252783]: [WARNING]  (252787) : All workers exited. Exiting... (0)
Nov 25 11:02:28 compute-0 systemd[1]: libpod-8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413.scope: Deactivated successfully.
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.117 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00114|binding|INFO|Setting lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d ovn-installed in OVS
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00115|if_status|INFO|Not setting lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d down as sb is readonly
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.122 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 podman[253627]: 2025-11-25 11:02:28.124843137 +0000 UTC m=+0.070835852 container died 8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00116|binding|INFO|Releasing lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d from this chassis (sb_readonly=0)
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.149 189385 INFO nova.virt.libvirt.driver [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Instance destroyed successfully.
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.150 189385 DEBUG nova.objects.instance [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lazy-loading 'resources' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.151 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:2a 10.100.0.6'], port_security=['fa:16:3e:82:ff:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4d7af36-620f-46df-8347-4eaeed7856c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '826c484414ce4e89a03cf37f2359f956', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f94f5308-9585-46c9-858a-5bfd8b44a26c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5e6d622-8d17-4306-9b9d-6c16ad078515, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=5a6cf231-3edc-4338-bb8e-74f0f7e6672d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.161 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:2a 10.100.0.6'], port_security=['fa:16:3e:82:ff:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4d7af36-620f-46df-8347-4eaeed7856c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '826c484414ce4e89a03cf37f2359f956', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f94f5308-9585-46c9-858a-5bfd8b44a26c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5e6d622-8d17-4306-9b9d-6c16ad078515, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=5a6cf231-3edc-4338-bb8e-74f0f7e6672d) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413-userdata-shm.mount: Deactivated successfully.
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.164 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b09e34b21b152b6dfb102b0e2bd69f6ad690321fb7fa66d4e6c98c754c109a2-merged.mount: Deactivated successfully.
Nov 25 11:02:28 compute-0 podman[253627]: 2025-11-25 11:02:28.174504894 +0000 UTC m=+0.120497609 container cleanup 8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.182 189385 DEBUG nova.virt.libvirt.vif [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-529149042',display_name='tempest-ServerActionsTestJSON-server-529149042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-529149042',id=7,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzWJb9N1xKHRqheyAvQfzLJN/1EXZRkwEZB48VX8Av1lPssKsugB7RXaWiGMq0S+O13B7XTAT58mD2UKEKFp3RMSIDEcXXZEClMlcSxvJw62JrrIVelFsyCSZ1uD8LCvQ==',key_name='tempest-keypair-689374724',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:01:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='826c484414ce4e89a03cf37f2359f956',ramdisk_id='',reservation_id='r-g88p5309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-62183409',owner_user_name='tempest-ServerActionsTestJSON-62183409-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:02:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28101b622acc41c3aa3608e548b7ef96',uuid=c4d7af36-620f-46df-8347-4eaeed7856c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.182 189385 DEBUG nova.network.os_vif_util [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converting VIF {"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.183 189385 DEBUG nova.network.os_vif_util [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:02:28 compute-0 systemd[1]: libpod-conmon-8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413.scope: Deactivated successfully.
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.183 189385 DEBUG os_vif [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.184 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.185 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a6cf231-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.188 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.190 189385 INFO os_vif [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e')
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.196 189385 DEBUG nova.virt.libvirt.driver [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Start _get_guest_xml network_info=[{"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.202 189385 WARNING nova.virt.libvirt.driver [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.211 189385 DEBUG nova.virt.libvirt.host [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.211 189385 DEBUG nova.virt.libvirt.host [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.219 189385 DEBUG nova.virt.libvirt.host [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.220 189385 DEBUG nova.virt.libvirt.host [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.220 189385 DEBUG nova.virt.libvirt.driver [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.220 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.221 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.221 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.221 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.221 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.221 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.221 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.221 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.222 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.222 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.222 189385 DEBUG nova.virt.hardware [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.222 189385 DEBUG nova.objects.instance [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.240 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:28 compute-0 podman[253663]: 2025-11-25 11:02:28.260083431 +0000 UTC m=+0.055658652 container remove 8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.268 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8274e474-32d6-49fa-826b-d0e8909a644b]: (4, ('Tue Nov 25 11:02:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de (8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413)\n8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413\nTue Nov 25 11:02:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de (8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413)\n8730e092df113b275b3c805b66fbbb0607dd1f46fd01f74f1084850213ea7413\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.270 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[b6845c50-f375-49d7-96fb-97853948d5db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.272 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23ecff9c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.275 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 kernel: tap23ecff9c-50: left promiscuous mode
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.284 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[dff0ba5d-2882-4376-80eb-a545300ff57d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.294 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.303 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[c291bde6-4308-4fa8-99bb-5ad597a3896f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.306 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[69e2abc0-c24d-4960-a387-1f799f1be4e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.309 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.config --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.311 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.311 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.312 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.313 189385 DEBUG nova.virt.libvirt.vif [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-529149042',display_name='tempest-ServerActionsTestJSON-server-529149042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-529149042',id=7,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzWJb9N1xKHRqheyAvQfzLJN/1EXZRkwEZB48VX8Av1lPssKsugB7RXaWiGMq0S+O13B7XTAT58mD2UKEKFp3RMSIDEcXXZEClMlcSxvJw62JrrIVelFsyCSZ1uD8LCvQ==',key_name='tempest-keypair-689374724',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:01:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='826c484414ce4e89a03cf37f2359f956',ramdisk_id='',reservation_id='r-g88p5309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-62183409',owner_user_name='tempest-ServerActionsTestJSON-62183409-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:02:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28101b622acc41c3aa3608e548b7ef96',uuid=c4d7af36-620f-46df-8347-4eaeed7856c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.314 189385 DEBUG nova.network.os_vif_util [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converting VIF {"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.315 189385 DEBUG nova.network.os_vif_util [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.316 189385 DEBUG nova.objects.instance [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lazy-loading 'pci_devices' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.327 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0964efb4-9c05-4b46-806d-73accc5e6c66]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538862, 'reachable_time': 33641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253677, 'error': None, 'target': 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.330 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:02:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d23ecff9c\x2d5f66\x2d4ace\x2d9c23\x2d23cc4a7533de.mount: Deactivated successfully.
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.330 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[9d209638-f1a3-463e-ae89-e906f46bf46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.330 189385 DEBUG nova.virt.libvirt.driver [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <uuid>c4d7af36-620f-46df-8347-4eaeed7856c6</uuid>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <name>instance-00000007</name>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <nova:name>tempest-ServerActionsTestJSON-server-529149042</nova:name>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:02:28</nova:creationTime>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:02:28 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:02:28 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:02:28 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:02:28 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:02:28 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:02:28 compute-0 nova_compute[189381]:         <nova:user uuid="28101b622acc41c3aa3608e548b7ef96">tempest-ServerActionsTestJSON-62183409-project-member</nova:user>
Nov 25 11:02:28 compute-0 nova_compute[189381]:         <nova:project uuid="826c484414ce4e89a03cf37f2359f956">tempest-ServerActionsTestJSON-62183409</nova:project>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:02:28 compute-0 nova_compute[189381]:         <nova:port uuid="5a6cf231-3edc-4338-bb8e-74f0f7e6672d">
Nov 25 11:02:28 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <system>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <entry name="serial">c4d7af36-620f-46df-8347-4eaeed7856c6</entry>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <entry name="uuid">c4d7af36-620f-46df-8347-4eaeed7856c6</entry>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </system>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <os>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   </os>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <features>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   </features>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk.config"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:82:ff:2a"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <target dev="tap5a6cf231-3e"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/console.log" append="off"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <video>
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </video>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <input type="keyboard" bus="usb"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:02:28 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:02:28 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:02:28 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:02:28 compute-0 nova_compute[189381]: </domain>
Nov 25 11:02:28 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
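The XML block above is the guest definition Nova's libvirt driver just rendered for the hard reboot: q35 machine type, virtio disk and NIC, a SATA config-drive CDROM, and the tap device tap5a6cf231-3e that OVN will claim further down. A minimal sketch for pulling the live definition of the same domain for comparison, assuming access to the system libvirtd socket (UUID copied from the log):

    # Fetch the running domain's XML via libvirt-python.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('c4d7af36-620f-46df-8347-4eaeed7856c6')
    print(dom.XMLDesc())   # live definition; should mirror the XML above
    conn.close()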
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.331 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.332 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d in datapath 23ecff9c-5f66-4ace-9c23-23cc4a7533de unbound from our chassis
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.334 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23ecff9c-5f66-4ace-9c23-23cc4a7533de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.334 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cfdc3c-093e-4591-971d-3a8a60e36987]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.335 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d in datapath 23ecff9c-5f66-4ace-9c23-23cc4a7533de unbound from our chassis
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.336 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23ecff9c-5f66-4ace-9c23-23cc4a7533de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.337 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e616c1d2-3877-4a08-ad99-2db1b6b31692]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.401 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.402 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.432 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.465 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.466 189385 DEBUG nova.objects.instance [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.479 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.542 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.543 189385 DEBUG nova.virt.disk.api [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Checking if we can resize image /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.544 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.609 189385 DEBUG oslo_concurrency.processutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.610 189385 DEBUG nova.virt.disk.api [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Cannot resize image /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
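The qemu-img probes above are how Nova inspects the instance disk and its _base backing file: each call is wrapped in oslo_concurrency.prlimit (address space capped at 1 GiB, 30 s of CPU) so a malformed image cannot hang the compute service, and the JSON output feeds can_resize_image, which only ever grows a disk. Here the flavor's 1 GiB root disk is not larger than the current virtual size, so the resize is skipped. A sketch of the same probe without the prlimit wrapper (real qemu-img flags; path copied from the log):

    # Reproduce Nova's disk probe and read the fields it acts on.
    import json, subprocess

    disk = '/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk'
    out = subprocess.run(
        ['qemu-img', 'info', '--force-share', '--output=json', disk],
        capture_output=True, text=True, check=True).stdout
    info = json.loads(out)
    print(info['format'], info['virtual-size'])   # e.g. qcow2, size in bytes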
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.610 189385 DEBUG nova.objects.instance [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lazy-loading 'migration_context' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.624 189385 DEBUG nova.virt.libvirt.vif [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-529149042',display_name='tempest-ServerActionsTestJSON-server-529149042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-529149042',id=7,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzWJb9N1xKHRqheyAvQfzLJN/1EXZRkwEZB48VX8Av1lPssKsugB7RXaWiGMq0S+O13B7XTAT58mD2UKEKFp3RMSIDEcXXZEClMlcSxvJw62JrrIVelFsyCSZ1uD8LCvQ==',key_name='tempest-keypair-689374724',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:01:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='826c484414ce4e89a03cf37f2359f956',ramdisk_id='',reservation_id='r-g88p5309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-62183409',owner_user_name='tempest-ServerActionsTestJSON-62183409-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:02:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28101b622acc41c3aa3608e548b7ef96',uuid=c4d7af36-620f-46df-8347-4eaeed7856c6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.625 189385 DEBUG nova.network.os_vif_util [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converting VIF {"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.626 189385 DEBUG nova.network.os_vif_util [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.626 189385 DEBUG os_vif [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.626 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.627 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.627 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.630 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.630 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a6cf231-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.630 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a6cf231-3e, col_values=(('external_ids', {'iface-id': '5a6cf231-3edc-4338-bb8e-74f0f7e6672d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:ff:2a', 'vm-uuid': 'c4d7af36-620f-46df-8347-4eaeed7856c6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
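The two transactions above are os-vif's OVS plug: an idempotent AddBridgeCommand for br-int, then AddPortCommand plus a DbSetCommand writing the external_ids that ovn-controller matches against logical ports (iface-id is the Neutron port UUID, attached-mac the port MAC). A sketch of the equivalent ovs-vsctl invocation, shown only to make the transaction concrete; values are copied from the log, and running this by hand on a live node is not recommended:

    # ovs-vsctl equivalent of the AddPortCommand + DbSetCommand pair.
    import subprocess

    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap5a6cf231-3e',
         '--', 'set', 'Interface', 'tap5a6cf231-3e',
         'external_ids:iface-id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:82:ff:2a'],
        check=True)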
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.632 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 NetworkManager[56317]: <info>  [1764068548.6331] manager: (tap5a6cf231-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.634 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.639 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.641 189385 INFO os_vif [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e')
Nov 25 11:02:28 compute-0 kernel: tap5a6cf231-3e: entered promiscuous mode
Nov 25 11:02:28 compute-0 NetworkManager[56317]: <info>  [1764068548.7235] manager: (tap5a6cf231-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00117|binding|INFO|Claiming lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d for this chassis.
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00118|binding|INFO|5a6cf231-3edc-4338-bb8e-74f0f7e6672d: Claiming fa:16:3e:82:ff:2a 10.100.0.6
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.724 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 NetworkManager[56317]: <info>  [1764068548.7391] device (tap5a6cf231-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.739 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00119|binding|INFO|Setting lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d ovn-installed in OVS
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.742 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 nova_compute[189381]: 2025-11-25 11:02:28.745 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:28 compute-0 NetworkManager[56317]: <info>  [1764068548.7461] device (tap5a6cf231-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:02:28 compute-0 systemd-machined[155706]: New machine qemu-10-instance-00000007.
Nov 25 11:02:28 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000007.
Nov 25 11:02:28 compute-0 ovn_controller[97779]: 2025-11-25T11:02:28Z|00120|binding|INFO|Setting lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d up in Southbound
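ovn-controller has now completed the binding handshake: it saw the new OVS interface whose external_ids:iface-id matches a Port_Binding's logical_port, claimed the lport for this chassis, set ovn-installed on the interface, and flipped the port up in the Southbound database. The PortBindingUpdatedEvent the metadata agent logs just below fires off that same Southbound update. A sketch for checking the binding from the compute node, assuming ovn-sbctl can reach the Southbound DB (remote and SSL options are omitted and may be required in this deployment):

    # Ask the Southbound DB which chassis claimed the logical port.
    import subprocess

    subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding',
         'logical_port=5a6cf231-3edc-4338-bb8e-74f0f7e6672d'],
        check=True)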
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.964 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:2a 10.100.0.6'], port_security=['fa:16:3e:82:ff:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4d7af36-620f-46df-8347-4eaeed7856c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '826c484414ce4e89a03cf37f2359f956', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f94f5308-9585-46c9-858a-5bfd8b44a26c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5e6d622-8d17-4306-9b9d-6c16ad078515, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=5a6cf231-3edc-4338-bb8e-74f0f7e6672d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.965 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d in datapath 23ecff9c-5f66-4ace-9c23-23cc4a7533de bound to our chassis
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.966 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 23ecff9c-5f66-4ace-9c23-23cc4a7533de
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.977 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea77135-ea69-455b-81f8-efdffde7e1b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.979 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap23ecff9c-51 in ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.983 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap23ecff9c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.984 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[1099334e-cbae-4f15-b7aa-461502895e70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.985 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[6472f47a-b144-476e-a238-a6c17b04b268]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:28 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:28.997 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[ff13959a-dd6b-412f-9f6c-38b0fe9928dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.024 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[815ddce3-b524-44ee-ad28-fc57662908ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.053 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc7012b-2953-4672-9b2c-ba983f5a7d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 NetworkManager[56317]: <info>  [1764068549.0665] manager: (tap23ecff9c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.066 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[68f0b2ac-0712-4289-8583-7b912f911f92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.077 189385 DEBUG nova.objects.instance [None req-2f98a944-da50-49c3-952f-749df8314606 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lazy-loading 'flavor' on Instance uuid 388d7cfb-c9e5-413a-9649-93e137294b38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.100 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[967b570a-d27f-43b1-8a9f-deb15591ea8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.102 189385 DEBUG oslo_concurrency.lockutils [None req-2f98a944-da50-49c3-952f-749df8314606 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.103 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[e8527717-1310-42a9-8524-b7e150379b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 NetworkManager[56317]: <info>  [1764068549.1273] device (tap23ecff9c-50): carrier: link connected
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.138 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9a676e-e3e2-4941-96bd-87763c6f5d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.145 189385 DEBUG nova.virt.libvirt.host [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Removed pending event for c4d7af36-620f-46df-8347-4eaeed7856c6 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.146 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068549.1430821, c4d7af36-620f-46df-8347-4eaeed7856c6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.146 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] VM Resumed (Lifecycle Event)
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.147 189385 DEBUG nova.compute.manager [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.153 189385 INFO nova.virt.libvirt.driver [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Instance rebooted successfully.
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.153 189385 DEBUG nova.compute.manager [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.158 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cc48c5-5dcb-4d26-b30f-329172db4d4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23ecff9c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:aa:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545670, 'reachable_time': 27496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253746, 'error': None, 'target': 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.173 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[774b35c6-4537-40c3-b1c6-85b236844739]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:aa0e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 545670, 'tstamp': 545670}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253747, 'error': None, 'target': 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.185 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.190 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
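The lifecycle handling above is Nova reconciling libvirt events with its own state machine: the delayed Resumed and Started events each trigger a power-state sync, and because task_state is still reboot_started_hard the sync deliberately skips corrective action rather than fight the in-flight reboot; once the task completes, the per-instance lock is released (held 9.195 s for the whole hard reboot, as logged below). A sketch for auditing the reboot afterwards via the real openstack CLI, assuming suitable credentials are loaded:

    # List the recorded actions (create, reboot, ...) for this instance.
    import subprocess

    subprocess.run(
        ['openstack', 'server', 'event', 'list',
         'c4d7af36-620f-46df-8347-4eaeed7856c6'],
        check=True)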
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.191 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[23a6a401-7a0d-4ea9-8636-34beb6ea9a76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23ecff9c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:aa:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545670, 'reachable_time': 27496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253748, 'error': None, 'target': 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.223 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb7a6da-41e3-4327-9129-e5dfbb8a75db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.285 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[090d0e13-abbb-4583-8b84-c17898d2096e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.286 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23ecff9c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.287 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.288 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23ecff9c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:29 compute-0 NetworkManager[56317]: <info>  [1764068549.2908] manager: (tap23ecff9c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 25 11:02:29 compute-0 kernel: tap23ecff9c-50: entered promiscuous mode
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.292 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.301 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap23ecff9c-50, col_values=(('external_ids', {'iface-id': 'f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:29 compute-0 ovn_controller[97779]: 2025-11-25T11:02:29Z|00121|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.303 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.305 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.309 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/23ecff9c-5f66-4ace-9c23-23cc4a7533de.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/23ecff9c-5f66-4ace-9c23-23cc4a7533de.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.316 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.316 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068549.1432173, c4d7af36-620f-46df-8347-4eaeed7856c6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.317 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] VM Started (Lifecycle Event)
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.316 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[dabb9753-4eb2-43bb-8ab0-4e99217e578b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.321 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.324 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-23ecff9c-5f66-4ace-9c23-23cc4a7533de
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/23ecff9c-5f66-4ace-9c23-23cc4a7533de.pid.haproxy
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID 23ecff9c-5f66-4ace-9c23-23cc4a7533de
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.324 189385 DEBUG oslo_concurrency.lockutils [None req-d56203de-ed74-4a60-9d25-3a86fa14e0e2 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 9.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:29 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:29.327 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'env', 'PROCESS_TAG=haproxy-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/23ecff9c-5f66-4ace-9c23-23cc4a7533de.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.524 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:02:29 compute-0 nova_compute[189381]: 2025-11-25 11:02:29.529 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:02:29 compute-0 podman[253780]: 2025-11-25 11:02:29.717512547 +0000 UTC m=+0.056060694 container create bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 11:02:29 compute-0 podman[203557]: time="2025-11-25T11:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:02:29 compute-0 systemd[1]: Started libpod-conmon-bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4.scope.
Nov 25 11:02:29 compute-0 podman[253780]: 2025-11-25 11:02:29.690966698 +0000 UTC m=+0.029514865 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:02:29 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:02:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/762b7d4d6240c45da4ae05b63ed799c459be121a78a5096c2781c917f0c547ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:02:29 compute-0 podman[253780]: 2025-11-25 11:02:29.860347981 +0000 UTC m=+0.198896128 container init bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:02:29 compute-0 podman[253780]: 2025-11-25 11:02:29.872903244 +0000 UTC m=+0.211451391 container start bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:02:29 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[253795]: [NOTICE]   (253799) : New worker (253801) forked
Nov 25 11:02:29 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[253795]: [NOTICE]   (253799) : Loading success.
Nov 25 11:02:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30753 "" "Go-http-client/1.1"
Nov 25 11:02:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5261 "" "Go-http-client/1.1"
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.057 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.057 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.058 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.155 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.217 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.218 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.280 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.291 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.356 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.357 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.423 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.634 189385 DEBUG nova.network.neutron [req-e5510b59-18ab-4646-bd29-72f9fe88cab8 req-8a797f80-edf5-4562-b041-b6e6ae967ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updated VIF entry in instance network info cache for port c0d318cc-f546-4bbc-aebc-f0c185dff8aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.635 189385 DEBUG nova.network.neutron [req-e5510b59-18ab-4646-bd29-72f9fe88cab8 req-8a797f80-edf5-4562-b041-b6e6ae967ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.662 189385 DEBUG oslo_concurrency.lockutils [req-e5510b59-18ab-4646-bd29-72f9fe88cab8 req-8a797f80-edf5-4562-b041-b6e6ae967ec6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.663 189385 DEBUG oslo_concurrency.lockutils [None req-2f98a944-da50-49c3-952f-749df8314606 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquired lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.813 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.814 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5059MB free_disk=72.10590362548828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.814 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.814 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.910 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance c4d7af36-620f-46df-8347-4eaeed7856c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.910 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 388d7cfb-c9e5-413a-9649-93e137294b38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.910 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:02:30 compute-0 nova_compute[189381]: 2025-11-25 11:02:30.911 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:02:31 compute-0 nova_compute[189381]: 2025-11-25 11:02:31.019 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:02:31 compute-0 nova_compute[189381]: 2025-11-25 11:02:31.036 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:02:31 compute-0 nova_compute[189381]: 2025-11-25 11:02:31.071 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:02:31 compute-0 nova_compute[189381]: 2025-11-25 11:02:31.071 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:31 compute-0 openstack_network_exporter[205722]: ERROR   11:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:02:31 compute-0 openstack_network_exporter[205722]: ERROR   11:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:02:31 compute-0 openstack_network_exporter[205722]: ERROR   11:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:02:31 compute-0 openstack_network_exporter[205722]: ERROR   11:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:02:32 compute-0 nova_compute[189381]: 2025-11-25 11:02:32.072 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:32 compute-0 nova_compute[189381]: 2025-11-25 11:02:32.073 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:02:32 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:32.116 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:32 compute-0 nova_compute[189381]: 2025-11-25 11:02:32.451 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:02:32 compute-0 nova_compute[189381]: 2025-11-25 11:02:32.577 189385 DEBUG nova.network.neutron [None req-2f98a944-da50-49c3-952f-749df8314606 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:02:32 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:32.691 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:32 compute-0 nova_compute[189381]: 2025-11-25 11:02:32.739 189385 DEBUG nova.compute.manager [req-a0367779-fc38-4258-867e-edb06453ea8e req-9c5da50c-104c-4af5-8fa2-832a9b136964 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-changed-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:32 compute-0 nova_compute[189381]: 2025-11-25 11:02:32.740 189385 DEBUG nova.compute.manager [req-a0367779-fc38-4258-867e-edb06453ea8e req-9c5da50c-104c-4af5-8fa2-832a9b136964 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Refreshing instance network info cache due to event network-changed-c0d318cc-f546-4bbc-aebc-f0c185dff8aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:02:32 compute-0 nova_compute[189381]: 2025-11-25 11:02:32.740 189385 DEBUG oslo_concurrency.lockutils [req-a0367779-fc38-4258-867e-edb06453ea8e req-9c5da50c-104c-4af5-8fa2-832a9b136964 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:02:33 compute-0 nova_compute[189381]: 2025-11-25 11:02:33.435 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:33 compute-0 nova_compute[189381]: 2025-11-25 11:02:33.633 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:35 compute-0 nova_compute[189381]: 2025-11-25 11:02:35.509 189385 DEBUG nova.network.neutron [None req-2f98a944-da50-49c3-952f-749df8314606 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:02:35 compute-0 nova_compute[189381]: 2025-11-25 11:02:35.710 189385 DEBUG oslo_concurrency.lockutils [None req-2f98a944-da50-49c3-952f-749df8314606 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Releasing lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:02:35 compute-0 nova_compute[189381]: 2025-11-25 11:02:35.712 189385 DEBUG nova.compute.manager [None req-2f98a944-da50-49c3-952f-749df8314606 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Nov 25 11:02:35 compute-0 nova_compute[189381]: 2025-11-25 11:02:35.712 189385 DEBUG nova.compute.manager [None req-2f98a944-da50-49c3-952f-749df8314606 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] network_info to inject: |[{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Nov 25 11:02:35 compute-0 nova_compute[189381]: 2025-11-25 11:02:35.716 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:02:35 compute-0 nova_compute[189381]: 2025-11-25 11:02:35.717 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:02:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:36.068 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:36.071 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:36.074 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:36 compute-0 nova_compute[189381]: 2025-11-25 11:02:36.577 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:36 compute-0 nova_compute[189381]: 2025-11-25 11:02:36.735 189385 DEBUG nova.compute.manager [req-324ae8c5-d05e-41f7-88d7-c27dca2715cc req-f57dfdbc-2a88-4844-a3b4-28e496b47e05 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-vif-unplugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:36 compute-0 nova_compute[189381]: 2025-11-25 11:02:36.735 189385 DEBUG oslo_concurrency.lockutils [req-324ae8c5-d05e-41f7-88d7-c27dca2715cc req-f57dfdbc-2a88-4844-a3b4-28e496b47e05 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:36 compute-0 nova_compute[189381]: 2025-11-25 11:02:36.737 189385 DEBUG oslo_concurrency.lockutils [req-324ae8c5-d05e-41f7-88d7-c27dca2715cc req-f57dfdbc-2a88-4844-a3b4-28e496b47e05 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:36 compute-0 nova_compute[189381]: 2025-11-25 11:02:36.737 189385 DEBUG oslo_concurrency.lockutils [req-324ae8c5-d05e-41f7-88d7-c27dca2715cc req-f57dfdbc-2a88-4844-a3b4-28e496b47e05 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:36 compute-0 nova_compute[189381]: 2025-11-25 11:02:36.737 189385 DEBUG nova.compute.manager [req-324ae8c5-d05e-41f7-88d7-c27dca2715cc req-f57dfdbc-2a88-4844-a3b4-28e496b47e05 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] No waiting events found dispatching network-vif-unplugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:02:36 compute-0 nova_compute[189381]: 2025-11-25 11:02:36.737 189385 WARNING nova.compute.manager [req-324ae8c5-d05e-41f7-88d7-c27dca2715cc req-f57dfdbc-2a88-4844-a3b4-28e496b47e05 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received unexpected event network-vif-unplugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d for instance with vm_state active and task_state None.
Nov 25 11:02:36 compute-0 podman[253825]: 2025-11-25 11:02:36.967050964 +0000 UTC m=+0.069924405 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:02:36 compute-0 podman[253824]: 2025-11-25 11:02:36.977077214 +0000 UTC m=+0.083911700 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.172 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "388d7cfb-c9e5-413a-9649-93e137294b38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.173 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.173 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.174 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.174 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.176 189385 INFO nova.compute.manager [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Terminating instance
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.177 189385 DEBUG nova.compute.manager [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:02:37 compute-0 kernel: tapc0d318cc-f5 (unregistering): left promiscuous mode
Nov 25 11:02:37 compute-0 NetworkManager[56317]: <info>  [1764068557.2186] device (tapc0d318cc-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:02:37 compute-0 ovn_controller[97779]: 2025-11-25T11:02:37Z|00122|binding|INFO|Releasing lport c0d318cc-f546-4bbc-aebc-f0c185dff8aa from this chassis (sb_readonly=0)
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.230 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:37 compute-0 ovn_controller[97779]: 2025-11-25T11:02:37Z|00123|binding|INFO|Setting lport c0d318cc-f546-4bbc-aebc-f0c185dff8aa down in Southbound
Nov 25 11:02:37 compute-0 ovn_controller[97779]: 2025-11-25T11:02:37Z|00124|binding|INFO|Removing iface tapc0d318cc-f5 ovn-installed in OVS
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.234 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.252 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:37 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 25 11:02:37 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 40.275s CPU time.
Nov 25 11:02:37 compute-0 systemd-machined[155706]: Machine qemu-8-instance-00000008 terminated.
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.290 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:e1:52 10.100.0.14'], port_security=['fa:16:3e:92:e1:52 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '388d7cfb-c9e5-413a-9649-93e137294b38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2fd87850-667e-4c51-ba0e-fa79b8cba493', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aab9dbacd4e342dc8dba92c598ab985b', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8604b340-fad6-470f-ae73-7809d51611ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=159a6a68-a039-46f1-aa18-f4c9b1633455, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=c0d318cc-f546-4bbc-aebc-f0c185dff8aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.292 106634 INFO neutron.agent.ovn.metadata.agent [-] Port c0d318cc-f546-4bbc-aebc-f0c185dff8aa in datapath 2fd87850-667e-4c51-ba0e-fa79b8cba493 unbound from our chassis
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.294 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2fd87850-667e-4c51-ba0e-fa79b8cba493, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.295 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[584e3ba0-9a41-4f37-b426-708b416070fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.295 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493 namespace which is not needed anymore
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.459 189385 INFO nova.virt.libvirt.driver [-] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Instance destroyed successfully.
Nov 25 11:02:37 compute-0 neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493[252951]: [NOTICE]   (252955) : haproxy version is 2.8.14-c23fe91
Nov 25 11:02:37 compute-0 neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493[252951]: [NOTICE]   (252955) : path to executable is /usr/sbin/haproxy
Nov 25 11:02:37 compute-0 neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493[252951]: [WARNING]  (252955) : Exiting Master process...
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.462 189385 DEBUG nova.objects.instance [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lazy-loading 'resources' on Instance uuid 388d7cfb-c9e5-413a-9649-93e137294b38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:02:37 compute-0 neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493[252951]: [ALERT]    (252955) : Current worker (252957) exited with code 143 (Terminated)
Nov 25 11:02:37 compute-0 neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493[252951]: [WARNING]  (252955) : All workers exited. Exiting... (0)
Nov 25 11:02:37 compute-0 systemd[1]: libpod-f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a.scope: Deactivated successfully.
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.474 189385 DEBUG nova.virt.libvirt.vif [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:01:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-2107609661',display_name='tempest-AttachInterfacesUnderV243Test-server-2107609661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-2107609661',id=8,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNtO0HjgWiM0JSycO6jGf2/nAZhrR5B9RHoKEiCWRqTQ2ZEGJWpoGM2BnIEFm5FDR+Uhh3GbUmTBAMlbuu2npur0QUHXfwQUDwLTXRSY2Cr00b6N3oiGImBs0AlIIVa26g==',key_name='tempest-keypair-223894159',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:01:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aab9dbacd4e342dc8dba92c598ab985b',ramdisk_id='',reservation_id='r-ufa2json',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-2133702226',owner_user_name='tempest-AttachInterfacesUnderV243Test-2133702226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:02:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2c4b9fe3a6ed4ac6a15a5f331dbe9842',uuid=388d7cfb-c9e5-413a-9649-93e137294b38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.475 189385 DEBUG nova.network.os_vif_util [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Converting VIF {"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.476 189385 DEBUG nova.network.os_vif_util [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:92:e1:52,bridge_name='br-int',has_traffic_filtering=True,id=c0d318cc-f546-4bbc-aebc-f0c185dff8aa,network=Network(2fd87850-667e-4c51-ba0e-fa79b8cba493),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0d318cc-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.477 189385 DEBUG os_vif [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:e1:52,bridge_name='br-int',has_traffic_filtering=True,id=c0d318cc-f546-4bbc-aebc-f0c185dff8aa,network=Network(2fd87850-667e-4c51-ba0e-fa79b8cba493),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0d318cc-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.479 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:37 compute-0 podman[253881]: 2025-11-25 11:02:37.480018851 +0000 UTC m=+0.081848039 container died f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.480 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0d318cc-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.483 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.485 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.487 189385 INFO os_vif [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:e1:52,bridge_name='br-int',has_traffic_filtering=True,id=c0d318cc-f546-4bbc-aebc-f0c185dff8aa,network=Network(2fd87850-667e-4c51-ba0e-fa79b8cba493),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0d318cc-f5')
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.488 189385 INFO nova.virt.libvirt.driver [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Deleting instance files /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38_del
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.489 189385 INFO nova.virt.libvirt.driver [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Deletion of /var/lib/nova/instances/388d7cfb-c9e5-413a-9649-93e137294b38_del complete
Nov 25 11:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a-userdata-shm.mount: Deactivated successfully.
Nov 25 11:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-87cb3769e252a90ffbff6e2e214edd3f354a1469d94cc20104ead5191d5b6a2b-merged.mount: Deactivated successfully.
Nov 25 11:02:37 compute-0 podman[253881]: 2025-11-25 11:02:37.526372903 +0000 UTC m=+0.128202091 container cleanup f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:02:37 compute-0 systemd[1]: libpod-conmon-f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a.scope: Deactivated successfully.
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.594 189385 INFO nova.compute.manager [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Took 0.42 seconds to destroy the instance on the hypervisor.
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.596 189385 DEBUG oslo.service.loopingcall [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.597 189385 DEBUG nova.compute.manager [-] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.597 189385 DEBUG nova.network.neutron [-] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:02:37 compute-0 podman[253925]: 2025-11-25 11:02:37.619302123 +0000 UTC m=+0.066864077 container remove f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.627 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[4c076242-434c-48c2-8699-5dec889f7539]: (4, ('Tue Nov 25 11:02:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493 (f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a)\nf9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a\nTue Nov 25 11:02:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493 (f9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a)\nf9cc5c3dd383a80276edf00120e4d9d00912e04924f175cfca13506e81554d8a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.630 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[a759dd93-cd71-49b7-b9ea-733fddfa1604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.634 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fd87850-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:02:37 compute-0 kernel: tap2fd87850-60: left promiscuous mode
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.645 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:37 compute-0 nova_compute[189381]: 2025-11-25 11:02:37.650 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.653 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8a102520-6a23-40af-99d7-86d3a8f6e0ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.670 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[cb86aaa2-b6eb-41a2-99bb-cf98c087f055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.671 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[b0564fa0-3e8c-4a07-966a-68a44b5ca055]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.687 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8937a670-1021-486a-8090-4be3b652f66e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 539048, 'reachable_time': 39100, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253939, 'error': None, 'target': 'ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:02:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d2fd87850\x2d667e\x2d4c51\x2dba0e\x2dfa79b8cba493.mount: Deactivated successfully.
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.691 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:02:37 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:02:37.691 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b4ec1f-d6e8-4740-b1af-02d66913898a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
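With the haproxy sidecar stopped and removed, the agent deletes the per-network metadata namespace; neutron's privileged remove_netns is a thin wrapper over pyroute2. A minimal sketch, assuming root privileges (the namespace name is taken from the log):

    # Sketch of the netns removal logged by neutron.privileged...ip_lib.
    from pyroute2 import netns

    name = 'ovnmeta-2fd87850-667e-4c51-ba0e-fa79b8cba493'
    if name in netns.listnetns():
        netns.remove(name)  # also unmounts /run/netns/<name>, cf. the systemd line above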
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.129 189385 DEBUG nova.compute.manager [req-8fc5effc-eee3-4c10-ae9f-f15a0999be90 req-89cddc1e-59c1-4eff-98e7-ba402dbcab58 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-vif-unplugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.129 189385 DEBUG oslo_concurrency.lockutils [req-8fc5effc-eee3-4c10-ae9f-f15a0999be90 req-89cddc1e-59c1-4eff-98e7-ba402dbcab58 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.130 189385 DEBUG oslo_concurrency.lockutils [req-8fc5effc-eee3-4c10-ae9f-f15a0999be90 req-89cddc1e-59c1-4eff-98e7-ba402dbcab58 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.130 189385 DEBUG oslo_concurrency.lockutils [req-8fc5effc-eee3-4c10-ae9f-f15a0999be90 req-89cddc1e-59c1-4eff-98e7-ba402dbcab58 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.130 189385 DEBUG nova.compute.manager [req-8fc5effc-eee3-4c10-ae9f-f15a0999be90 req-89cddc1e-59c1-4eff-98e7-ba402dbcab58 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] No waiting events found dispatching network-vif-unplugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.131 189385 DEBUG nova.compute.manager [req-8fc5effc-eee3-4c10-ae9f-f15a0999be90 req-89cddc1e-59c1-4eff-98e7-ba402dbcab58 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-vif-unplugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
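The Acquiring/acquired/released triple around "<uuid>-events" is oslo.concurrency's in-process lock guarding nova's per-instance event registry. A minimal sketch of the same pattern; the event dict and its lookup are hypothetical stand-ins for _pop_event:

    # Sketch of the lockutils pattern logged above.
    from oslo_concurrency import lockutils

    instance_uuid = '388d7cfb-c9e5-413a-9649-93e137294b38'
    waiting_events = {}  # hypothetical stand-in for nova's event registry

    with lockutils.lock(f'{instance_uuid}-events'):
        # Critical section: pop any waiter for this network-vif event.
        event = waiting_events.pop('network-vif-unplugged', None)
    # event is None here, matching "No waiting events found" in the log.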
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.438 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.857 189385 DEBUG nova.compute.manager [req-8469652d-50a6-4f7a-b818-7bb853094178 req-936288b7-5e44-4e9a-9173-30896ba3bc57 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.858 189385 DEBUG oslo_concurrency.lockutils [req-8469652d-50a6-4f7a-b818-7bb853094178 req-936288b7-5e44-4e9a-9173-30896ba3bc57 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.858 189385 DEBUG oslo_concurrency.lockutils [req-8469652d-50a6-4f7a-b818-7bb853094178 req-936288b7-5e44-4e9a-9173-30896ba3bc57 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.858 189385 DEBUG oslo_concurrency.lockutils [req-8469652d-50a6-4f7a-b818-7bb853094178 req-936288b7-5e44-4e9a-9173-30896ba3bc57 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.859 189385 DEBUG nova.compute.manager [req-8469652d-50a6-4f7a-b818-7bb853094178 req-936288b7-5e44-4e9a-9173-30896ba3bc57 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] No waiting events found dispatching network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:02:38 compute-0 nova_compute[189381]: 2025-11-25 11:02:38.859 189385 WARNING nova.compute.manager [req-8469652d-50a6-4f7a-b818-7bb853094178 req-936288b7-5e44-4e9a-9173-30896ba3bc57 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received unexpected event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d for instance with vm_state active and task_state None.
Nov 25 11:02:39 compute-0 nova_compute[189381]: 2025-11-25 11:02:39.989 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.019 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.020 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.021 189385 DEBUG oslo_concurrency.lockutils [req-a0367779-fc38-4258-867e-edb06453ea8e req-9c5da50c-104c-4af5-8fa2-832a9b136964 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.022 189385 DEBUG nova.network.neutron [req-a0367779-fc38-4258-867e-edb06453ea8e req-9c5da50c-104c-4af5-8fa2-832a9b136964 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Refreshing network info cache for port c0d318cc-f546-4bbc-aebc-f0c185dff8aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.024 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.025 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.026 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.026 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.243 189385 DEBUG nova.compute.manager [req-575dbbb8-5c13-4145-aea4-3cc4d1c311e7 req-c555b1dd-a08c-4bf0-ba36-b1ecd60473e3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.244 189385 DEBUG oslo_concurrency.lockutils [req-575dbbb8-5c13-4145-aea4-3cc4d1c311e7 req-c555b1dd-a08c-4bf0-ba36-b1ecd60473e3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.244 189385 DEBUG oslo_concurrency.lockutils [req-575dbbb8-5c13-4145-aea4-3cc4d1c311e7 req-c555b1dd-a08c-4bf0-ba36-b1ecd60473e3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.244 189385 DEBUG oslo_concurrency.lockutils [req-575dbbb8-5c13-4145-aea4-3cc4d1c311e7 req-c555b1dd-a08c-4bf0-ba36-b1ecd60473e3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.244 189385 DEBUG nova.compute.manager [req-575dbbb8-5c13-4145-aea4-3cc4d1c311e7 req-c555b1dd-a08c-4bf0-ba36-b1ecd60473e3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] No waiting events found dispatching network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.244 189385 WARNING nova.compute.manager [req-575dbbb8-5c13-4145-aea4-3cc4d1c311e7 req-c555b1dd-a08c-4bf0-ba36-b1ecd60473e3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received unexpected event network-vif-plugged-c0d318cc-f546-4bbc-aebc-f0c185dff8aa for instance with vm_state active and task_state deleting.
Nov 25 11:02:40 compute-0 podman[253940]: 2025-11-25 11:02:40.966299393 +0000 UTC m=+0.082759707 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler)
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.975 189385 DEBUG nova.compute.manager [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.975 189385 DEBUG oslo_concurrency.lockutils [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.975 189385 DEBUG oslo_concurrency.lockutils [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.976 189385 DEBUG oslo_concurrency.lockutils [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.976 189385 DEBUG nova.compute.manager [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] No waiting events found dispatching network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.976 189385 WARNING nova.compute.manager [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received unexpected event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d for instance with vm_state active and task_state None.
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.976 189385 DEBUG nova.compute.manager [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.976 189385 DEBUG oslo_concurrency.lockutils [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.976 189385 DEBUG oslo_concurrency.lockutils [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.977 189385 DEBUG oslo_concurrency.lockutils [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.977 189385 DEBUG nova.compute.manager [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] No waiting events found dispatching network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:02:40 compute-0 nova_compute[189381]: 2025-11-25 11:02:40.977 189385 WARNING nova.compute.manager [req-85a58666-8da0-4c83-a982-d3a3b577969d req-4101315f-661c-42da-96e3-d2b5840a030d d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received unexpected event network-vif-plugged-5a6cf231-3edc-4338-bb8e-74f0f7e6672d for instance with vm_state active and task_state None.
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.484 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.741 189385 DEBUG nova.network.neutron [-] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.764 189385 DEBUG nova.compute.manager [req-d3ce424b-c379-4e30-b515-7bc3fe5dba61 req-414850d4-925d-48e8-9083-b9830dae812c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Received event network-vif-deleted-c0d318cc-f546-4bbc-aebc-f0c185dff8aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.765 189385 INFO nova.compute.manager [req-d3ce424b-c379-4e30-b515-7bc3fe5dba61 req-414850d4-925d-48e8-9083-b9830dae812c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Neutron deleted interface c0d318cc-f546-4bbc-aebc-f0c185dff8aa; detaching it from the instance and deleting it from the info cache
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.765 189385 DEBUG nova.network.neutron [req-d3ce424b-c379-4e30-b515-7bc3fe5dba61 req-414850d4-925d-48e8-9083-b9830dae812c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.779 189385 INFO nova.compute.manager [-] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Took 5.18 seconds to deallocate network for instance.
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.790 189385 DEBUG nova.compute.manager [req-d3ce424b-c379-4e30-b515-7bc3fe5dba61 req-414850d4-925d-48e8-9083-b9830dae812c d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Detach interface failed, port_id=c0d318cc-f546-4bbc-aebc-f0c185dff8aa, reason: Instance 388d7cfb-c9e5-413a-9649-93e137294b38 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.824 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.825 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.921 189385 DEBUG nova.compute.provider_tree [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.938 189385 DEBUG nova.scheduler.client.report [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
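The inventory dict above fixes what this node can schedule: placement treats capacity as (total - reserved) * allocation_ratio per resource class. Worked out for the logged values:

    # Effective capacity implied by the logged inventory, using
    # placement's capacity formula (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2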
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.965 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:42 compute-0 nova_compute[189381]: 2025-11-25 11:02:42.969 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:43 compute-0 nova_compute[189381]: 2025-11-25 11:02:43.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:43 compute-0 nova_compute[189381]: 2025-11-25 11:02:43.027 189385 INFO nova.scheduler.client.report [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Deleted allocations for instance 388d7cfb-c9e5-413a-9649-93e137294b38
Nov 25 11:02:43 compute-0 nova_compute[189381]: 2025-11-25 11:02:43.112 189385 DEBUG oslo_concurrency.lockutils [None req-b2b6498e-6d27-4186-9b75-d3a5b7821cad 2c4b9fe3a6ed4ac6a15a5f331dbe9842 aab9dbacd4e342dc8dba92c598ab985b - - default default] Lock "388d7cfb-c9e5-413a-9649-93e137294b38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:02:43 compute-0 nova_compute[189381]: 2025-11-25 11:02:43.441 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:44 compute-0 nova_compute[189381]: 2025-11-25 11:02:44.047 189385 DEBUG nova.network.neutron [req-a0367779-fc38-4258-867e-edb06453ea8e req-9c5da50c-104c-4af5-8fa2-832a9b136964 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updated VIF entry in instance network info cache for port c0d318cc-f546-4bbc-aebc-f0c185dff8aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:02:44 compute-0 nova_compute[189381]: 2025-11-25 11:02:44.048 189385 DEBUG nova.network.neutron [req-a0367779-fc38-4258-867e-edb06453ea8e req-9c5da50c-104c-4af5-8fa2-832a9b136964 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Updating instance_info_cache with network_info: [{"id": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "address": "fa:16:3e:92:e1:52", "network": {"id": "2fd87850-667e-4c51-ba0e-fa79b8cba493", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1233520272-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aab9dbacd4e342dc8dba92c598ab985b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0d318cc-f5", "ovs_interfaceid": "c0d318cc-f546-4bbc-aebc-f0c185dff8aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:02:44 compute-0 nova_compute[189381]: 2025-11-25 11:02:44.069 189385 DEBUG oslo_concurrency.lockutils [req-a0367779-fc38-4258-867e-edb06453ea8e req-9c5da50c-104c-4af5-8fa2-832a9b136964 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-388d7cfb-c9e5-413a-9649-93e137294b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:02:45 compute-0 podman[253960]: 2025-11-25 11:02:45.980933881 +0000 UTC m=+0.094647661 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:02:47 compute-0 nova_compute[189381]: 2025-11-25 11:02:47.488 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:48 compute-0 nova_compute[189381]: 2025-11-25 11:02:48.444 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:49 compute-0 podman[253981]: 2025-11-25 11:02:49.957715909 +0000 UTC m=+0.067484795 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 11:02:49 compute-0 podman[253980]: 2025-11-25 11:02:49.983780563 +0000 UTC m=+0.097464672 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350)
Nov 25 11:02:52 compute-0 nova_compute[189381]: 2025-11-25 11:02:52.453 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068557.4513917, 388d7cfb-c9e5-413a-9649-93e137294b38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:02:52 compute-0 nova_compute[189381]: 2025-11-25 11:02:52.454 189385 INFO nova.compute.manager [-] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] VM Stopped (Lifecycle Event)
Nov 25 11:02:52 compute-0 nova_compute[189381]: 2025-11-25 11:02:52.485 189385 DEBUG nova.compute.manager [None req-4cafeb5d-3774-49d9-8053-ad4bfbebeec7 - - - - - -] [instance: 388d7cfb-c9e5-413a-9649-93e137294b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:02:52 compute-0 nova_compute[189381]: 2025-11-25 11:02:52.492 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:52 compute-0 nova_compute[189381]: 2025-11-25 11:02:52.660 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:53 compute-0 nova_compute[189381]: 2025-11-25 11:02:53.445 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:53 compute-0 podman[254022]: 2025-11-25 11:02:53.958463091 +0000 UTC m=+0.069053200 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:02:53 compute-0 podman[254021]: 2025-11-25 11:02:53.98505775 +0000 UTC m=+0.100219132 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:02:54 compute-0 nova_compute[189381]: 2025-11-25 11:02:54.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:02:54 compute-0 ovn_controller[97779]: 2025-11-25T11:02:54Z|00125|binding|INFO|Releasing lport f7c4b000-bc8d-471b-bc5d-bc70f92cc1c7 from this chassis (sb_readonly=0)
Nov 25 11:02:54 compute-0 nova_compute[189381]: 2025-11-25 11:02:54.737 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:57 compute-0 nova_compute[189381]: 2025-11-25 11:02:57.494 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:57 compute-0 podman[254065]: 2025-11-25 11:02:57.945909407 +0000 UTC m=+0.059690969 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 11:02:58 compute-0 nova_compute[189381]: 2025-11-25 11:02:58.448 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:02:59 compute-0 podman[203557]: time="2025-11-25T11:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:02:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:02:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
Nov 25 11:03:01 compute-0 openstack_network_exporter[205722]: ERROR   11:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:03:01 compute-0 openstack_network_exporter[205722]: ERROR   11:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:03:01 compute-0 openstack_network_exporter[205722]: ERROR   11:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:03:01 compute-0 openstack_network_exporter[205722]: ERROR   11:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:03:01 compute-0 openstack_network_exporter[205722]: ERROR   11:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:03:02 compute-0 nova_compute[189381]: 2025-11-25 11:03:02.499 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.340 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.342 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.348 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance c4d7af36-620f-46df-8347-4eaeed7856c6 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.349 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/c4d7af36-620f-46df-8347-4eaeed7856c6 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:03:03 compute-0 nova_compute[189381]: 2025-11-25 11:03:03.450 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.307 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1979 Content-Type: application/json Date: Tue, 25 Nov 2025 11:03:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c3bbddae-45bb-4290-a83f-0ef6d9e25a79 x-openstack-request-id: req-c3bbddae-45bb-4290-a83f-0ef6d9e25a79 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.307 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "c4d7af36-620f-46df-8347-4eaeed7856c6", "name": "tempest-ServerActionsTestJSON-server-529149042", "status": "ACTIVE", "tenant_id": "826c484414ce4e89a03cf37f2359f956", "user_id": "28101b622acc41c3aa3608e548b7ef96", "metadata": {}, "hostId": "280a8afe3eb8d317916570d4c9d68aabb8a681868dc06019eab04da4", "image": {"id": "b388f0fb-bd04-4296-928b-44c706e0493e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b388f0fb-bd04-4296-928b-44c706e0493e"}]}, "flavor": {"id": "b7c0626e-febc-4083-b621-6f5ee0740a18", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b7c0626e-febc-4083-b621-6f5ee0740a18"}]}, "created": "2025-11-25T11:01:06Z", "updated": "2025-11-25T11:02:29Z", "addresses": {"tempest-ServerActionsTestJSON-1257722246-network": [{"version": 4, "addr": "10.100.0.6", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:82:ff:2a"}, {"version": 4, "addr": "192.168.122.210", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:82:ff:2a"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/c4d7af36-620f-46df-8347-4eaeed7856c6"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/c4d7af36-620f-46df-8347-4eaeed7856c6"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-689374724", "OS-SRV-USG:launched_at": "2025-11-25T11:01:25.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1305445087"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000007", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.308 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/c4d7af36-620f-46df-8347-4eaeed7856c6 used request id req-c3bbddae-45bb-4290-a83f-0ef6d9e25a79 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.310 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'c4d7af36-620f-46df-8347-4eaeed7856c6', 'name': 'tempest-ServerActionsTestJSON-server-529149042', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '826c484414ce4e89a03cf37f2359f956', 'user_id': '28101b622acc41c3aa3608e548b7ef96', 'hostId': '280a8afe3eb8d317916570d4c9d68aabb8a681868dc06019eab04da4', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:03:04.310876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.316 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for c4d7af36-620f-46df-8347-4eaeed7856c6 / tap5a6cf231-3e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.317 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.317 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.318 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.320 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:03:04.318159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.320 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:03:04.320903) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.346 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/memory.usage volume: 40.46875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.348 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.349 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.349 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.349 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-529149042>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-529149042>]
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.351 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.351 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.351 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-25T11:03:04.349161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.352 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:03:04.351947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.353 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.353 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.354 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.354 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:03:04.353893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.355 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.355 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:03:04.355494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.355 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.356 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.357 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.357 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.358 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/cpu volume: 33880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:03:04.357729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.359 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.360 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.360 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:03:04.360211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.361 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.362 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:03:04.362374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.384 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.385 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.386 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.387 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:03:04.387395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.424 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.read.bytes volume: 28530688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.425 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.read.bytes volume: 119062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.427 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:03:04.427894) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.428 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.read.latency volume: 858224762 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.428 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.read.latency volume: 27771956 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.430 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.431 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:03:04.431550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.432 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.read.requests volume: 1037 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.432 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.read.requests volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.433 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:03:04.435424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.435 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.436 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.437 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.438 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:03:04.438787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.439 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.write.bytes volume: 147456 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.439 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.442 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:03:04.442143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.442 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.write.latency volume: 21789152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.443 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.444 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.444 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.445 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:03:04.444851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.445 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.445 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.446 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.446 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.446 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.write.requests volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.446 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.447 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:03:04.446161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.447 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.448 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:03:04.448342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.448 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.449 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.450 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.450 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.allocation volume: 30482432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:03:04.450449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.451 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.451 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.452 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-25T11:03:04.453064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.453 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.453 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-529149042>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-529149042>]
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.454 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.455 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.455 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:03:04.455308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.456 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.457 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.457 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:03:04.457194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.458 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.459 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:03:04.459352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.459 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.460 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.461 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:03:04.461221) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.461 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.462 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.463 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:03:04.463212) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.463 14 DEBUG ceilometer.compute.pollsters [-] c4d7af36-620f-46df-8347-4eaeed7856c6/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:03:04.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:03:04 compute-0 nova_compute[189381]: 2025-11-25 11:03:04.709 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:06 compute-0 ovn_controller[97779]: 2025-11-25T11:03:06Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:ff:2a 10.100.0.6
Nov 25 11:03:07 compute-0 nova_compute[189381]: 2025-11-25 11:03:07.502 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:07 compute-0 podman[254098]: 2025-11-25 11:03:07.958726864 +0000 UTC m=+0.074342993 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.build-date=20251118, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Nov 25 11:03:07 compute-0 podman[254099]: 2025-11-25 11:03:07.958952411 +0000 UTC m=+0.070035388 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 11:03:08 compute-0 nova_compute[189381]: 2025-11-25 11:03:08.453 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:11 compute-0 podman[254135]: 2025-11-25 11:03:11.962067549 +0000 UTC m=+0.071855671 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.buildah.version=1.29.0)
Nov 25 11:03:12 compute-0 nova_compute[189381]: 2025-11-25 11:03:12.506 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:13 compute-0 nova_compute[189381]: 2025-11-25 11:03:13.456 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:16 compute-0 podman[254154]: 2025-11-25 11:03:16.952738541 +0000 UTC m=+0.060440081 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 25 11:03:17 compute-0 nova_compute[189381]: 2025-11-25 11:03:17.510 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:18 compute-0 nova_compute[189381]: 2025-11-25 11:03:18.459 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:20 compute-0 podman[254173]: 2025-11-25 11:03:20.988838605 +0000 UTC m=+0.094839066 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:03:20 compute-0 podman[254172]: 2025-11-25 11:03:20.989418082 +0000 UTC m=+0.100203311 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41)
Nov 25 11:03:22 compute-0 nova_compute[189381]: 2025-11-25 11:03:22.513 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:23 compute-0 nova_compute[189381]: 2025-11-25 11:03:23.462 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:24 compute-0 podman[254217]: 2025-11-25 11:03:24.965489218 +0000 UTC m=+0.072060327 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 25 11:03:25 compute-0 podman[254216]: 2025-11-25 11:03:25.00389198 +0000 UTC m=+0.114796144 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:03:26 compute-0 nova_compute[189381]: 2025-11-25 11:03:26.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:27 compute-0 nova_compute[189381]: 2025-11-25 11:03:27.515 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:28 compute-0 nova_compute[189381]: 2025-11-25 11:03:28.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:28 compute-0 nova_compute[189381]: 2025-11-25 11:03:28.463 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:28 compute-0 podman[254260]: 2025-11-25 11:03:28.937860718 +0000 UTC m=+0.053433378 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:03:29 compute-0 podman[203557]: time="2025-11-25T11:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:03:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:03:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
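The two GET requests above are the prometheus-podman-exporter scraping the libpod REST API over /run/podman/podman.sock (the CONTAINER_HOST configured for podman_exporter a few records earlier). A rough replay of the first query, assuming read access to that socket; UnixHTTPConnection is an illustrative helper, not exporter code:

# Sketch: replay the logged containers/json query by hand against the
# libpod REST API on the local Unix socket.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket instead of TCP."""
    def __init__(self, sock_path: str):
        super().__init__("localhost")
        self.sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
with conn.getresponse() as resp:
    for c in json.loads(resp.read()):
        print(c["Names"][0], c["State"])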
Nov 25 11:03:30 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:30.346 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:03:30 compute-0 nova_compute[189381]: 2025-11-25 11:03:30.347 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:30 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:30.348 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:03:31 compute-0 openstack_network_exporter[205722]: ERROR   11:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:03:31 compute-0 openstack_network_exporter[205722]: ERROR   11:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:03:31 compute-0 openstack_network_exporter[205722]: ERROR   11:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:03:31 compute-0 openstack_network_exporter[205722]: ERROR   11:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:03:31 compute-0 openstack_network_exporter[205722]: ERROR   11:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
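The ERROR records above are the openstack_network_exporter probing appctl control sockets that do not exist here: ovn-northd runs on the control plane rather than on a compute node, and the dpif-netdev complaints suggest this host runs the kernel OVS datapath, so there is no userspace (netdev) datapath with PMD threads to query. A quick triage sketch listing which control sockets are actually present (rundir paths are the usual defaults and may differ; this deployment maps /var/lib/openvswitch/ovn to /run/ovn inside the ovn_controller container):

# Sketch: show which appctl control sockets exist on this host.
import glob

for pattern in ("/var/run/openvswitch/*.ctl",
                "/var/run/ovn/*.ctl",
                "/var/lib/openvswitch/ovn/*.ctl"):
    print(pattern, "->", glob.glob(pattern) or "none")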
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.062 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.063 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.063 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.089 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.090 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.090 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.091 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.172 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.231 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.232 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.296 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
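The disk audit above shells out to qemu-img under oslo_concurrency.prlimit, which caps the child's address space at 1 GiB and CPU time at 30 s before exec'ing. A hand-rolled sketch of an equivalent call (reproduces the rlimits with preexec_fn rather than the oslo wrapper; needs read access to the instance directory):

# Sketch: qemu-img info with the same resource caps as the logged
# oslo_concurrency.prlimit invocation (--as=1073741824 --cpu=30).
import json
import resource
import subprocess

def set_limits():
    resource.setrlimit(resource.RLIMIT_AS, (1073741824, 1073741824))  # --as
    resource.setrlimit(resource.RLIMIT_CPU, (30, 30))                 # --cpu

proc = subprocess.run(
    ["/usr/bin/qemu-img", "info",
     "/var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6/disk",
     "--force-share", "--output=json"],
    env={"LC_ALL": "C", "LANG": "C"},
    preexec_fn=set_limits,
    capture_output=True, check=True, text=True,
)
print(json.loads(proc.stdout)["format"])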
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.518 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.610 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.612 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5173MB free_disk=72.13461685180664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.612 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.612 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.709 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance c4d7af36-620f-46df-8347-4eaeed7856c6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.709 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.710 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:03:32 compute-0 nova_compute[189381]: 2025-11-25 11:03:32.727 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.059 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.060 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.078 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.102 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.146 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.174 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
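The inventory payload above fixes this node's schedulable capacity: placement treats usable capacity as (total - reserved) * allocation_ratio per resource class. Worked out for the logged values:

# Effective capacity implied by the inventory above, using the
# placement formula: (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2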
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.300 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.301 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:03:33 compute-0 nova_compute[189381]: 2025-11-25 11:03:33.466 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:34 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:34.349 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:03:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:36.069 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:03:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:36.069 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:03:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:36.070 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:03:36 compute-0 nova_compute[189381]: 2025-11-25 11:03:36.888 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:37 compute-0 nova_compute[189381]: 2025-11-25 11:03:37.258 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:37 compute-0 nova_compute[189381]: 2025-11-25 11:03:37.259 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:37 compute-0 nova_compute[189381]: 2025-11-25 11:03:37.523 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:38 compute-0 nova_compute[189381]: 2025-11-25 11:03:38.478 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:38 compute-0 podman[254290]: 2025-11-25 11:03:38.954425804 +0000 UTC m=+0.069718989 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 11:03:38 compute-0 podman[254291]: 2025-11-25 11:03:38.982511587 +0000 UTC m=+0.094172377 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.423 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.424 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.424 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.424 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.425 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.426 189385 INFO nova.compute.manager [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Terminating instance
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.427 189385 DEBUG nova.compute.manager [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:03:41 compute-0 kernel: tap5a6cf231-3e (unregistering): left promiscuous mode
Nov 25 11:03:41 compute-0 NetworkManager[56317]: <info>  [1764068621.4590] device (tap5a6cf231-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:03:41 compute-0 ovn_controller[97779]: 2025-11-25T11:03:41Z|00126|binding|INFO|Releasing lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d from this chassis (sb_readonly=0)
Nov 25 11:03:41 compute-0 ovn_controller[97779]: 2025-11-25T11:03:41Z|00127|binding|INFO|Setting lport 5a6cf231-3edc-4338-bb8e-74f0f7e6672d down in Southbound
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.466 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 ovn_controller[97779]: 2025-11-25T11:03:41Z|00128|binding|INFO|Removing iface tap5a6cf231-3e ovn-installed in OVS
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.476 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.490 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 25 11:03:41 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000007.scope: Consumed 44.894s CPU time.
Nov 25 11:03:41 compute-0 systemd-machined[155706]: Machine qemu-10-instance-00000007 terminated.
Nov 25 11:03:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:41.656 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:ff:2a 10.100.0.6'], port_security=['fa:16:3e:82:ff:2a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c4d7af36-620f-46df-8347-4eaeed7856c6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '826c484414ce4e89a03cf37f2359f956', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f94f5308-9585-46c9-858a-5bfd8b44a26c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5e6d622-8d17-4306-9b9d-6c16ad078515, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=5a6cf231-3edc-4338-bb8e-74f0f7e6672d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:03:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:41.657 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 5a6cf231-3edc-4338-bb8e-74f0f7e6672d in datapath 23ecff9c-5f66-4ace-9c23-23cc4a7533de unbound from our chassis
Nov 25 11:03:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:41.659 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23ecff9c-5f66-4ace-9c23-23cc4a7533de, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:03:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:41.660 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[a84b0e6c-30a2-4d25-859c-f89a7b1776a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:03:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:41.661 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de namespace which is not needed anymore
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.661 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.668 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.723 189385 INFO nova.virt.libvirt.driver [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Instance destroyed successfully.
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.725 189385 DEBUG nova.objects.instance [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lazy-loading 'resources' on Instance uuid c4d7af36-620f-46df-8347-4eaeed7856c6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.745 189385 DEBUG nova.virt.libvirt.vif [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:01:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-529149042',display_name='tempest-ServerActionsTestJSON-server-529149042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-529149042',id=7,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDzWJb9N1xKHRqheyAvQfzLJN/1EXZRkwEZB48VX8Av1lPssKsugB7RXaWiGMq0S+O13B7XTAT58mD2UKEKFp3RMSIDEcXXZEClMlcSxvJw62JrrIVelFsyCSZ1uD8LCvQ==',key_name='tempest-keypair-689374724',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:01:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='826c484414ce4e89a03cf37f2359f956',ramdisk_id='',reservation_id='r-g88p5309',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-62183409',owner_user_name='tempest-ServerActionsTestJSON-62183409-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:02:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='28101b622acc41c3aa3608e548b7ef96',uuid=c4d7af36-620f-46df-8347-4eaeed7856c6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.746 189385 DEBUG nova.network.os_vif_util [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converting VIF {"id": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "address": "fa:16:3e:82:ff:2a", "network": {"id": "23ecff9c-5f66-4ace-9c23-23cc4a7533de", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1257722246-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "826c484414ce4e89a03cf37f2359f956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a6cf231-3e", "ovs_interfaceid": "5a6cf231-3edc-4338-bb8e-74f0f7e6672d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.747 189385 DEBUG nova.network.os_vif_util [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.747 189385 DEBUG os_vif [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.749 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.749 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a6cf231-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.751 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.753 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.756 189385 INFO os_vif [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:ff:2a,bridge_name='br-int',has_traffic_filtering=True,id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d,network=Network(23ecff9c-5f66-4ace-9c23-23cc4a7533de),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a6cf231-3e')
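The unplug just logged was carried out by the DelPortCommand transaction at 11:03:41.749. A sketch of the equivalent manual cleanup via ovs-vsctl (same port, bridge, and if_exists semantics; run as root on the compute host):

# Sketch: CLI equivalent of DelPortCommand(port=tap5a6cf231-3e,
# bridge=br-int, if_exists=True) from the transaction above.
import subprocess

subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap5a6cf231-3e"],
    check=True,
)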
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.757 189385 INFO nova.virt.libvirt.driver [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Deleting instance files /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6_del
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.758 189385 INFO nova.virt.libvirt.driver [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Deletion of /var/lib/nova/instances/c4d7af36-620f-46df-8347-4eaeed7856c6_del complete
Nov 25 11:03:41 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[253795]: [NOTICE]   (253799) : haproxy version is 2.8.14-c23fe91
Nov 25 11:03:41 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[253795]: [NOTICE]   (253799) : path to executable is /usr/sbin/haproxy
Nov 25 11:03:41 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[253795]: [WARNING]  (253799) : Exiting Master process...
Nov 25 11:03:41 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[253795]: [ALERT]    (253799) : Current worker (253801) exited with code 143 (Terminated)
Nov 25 11:03:41 compute-0 neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de[253795]: [WARNING]  (253799) : All workers exited. Exiting... (0)
Nov 25 11:03:41 compute-0 systemd[1]: libpod-bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4.scope: Deactivated successfully.
Nov 25 11:03:41 compute-0 conmon[253795]: conmon bfef0d1e369a19bbfb0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4.scope/container/memory.events
Nov 25 11:03:41 compute-0 podman[254366]: 2025-11-25 11:03:41.846918936 +0000 UTC m=+0.078981037 container died bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4-userdata-shm.mount: Deactivated successfully.
Nov 25 11:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-762b7d4d6240c45da4ae05b63ed799c459be121a78a5096c2781c917f0c547ad-merged.mount: Deactivated successfully.
Nov 25 11:03:41 compute-0 podman[254366]: 2025-11-25 11:03:41.901861756 +0000 UTC m=+0.133923867 container cleanup bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 25 11:03:41 compute-0 systemd[1]: libpod-conmon-bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4.scope: Deactivated successfully.
Nov 25 11:03:41 compute-0 podman[254396]: 2025-11-25 11:03:41.975330053 +0000 UTC m=+0.048626059 container remove bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 25 11:03:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:41.990 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[485e0506-9cd3-41c0-8bc9-6df020dd2fda]: (4, ('Tue Nov 25 11:03:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de (bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4)\nbfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4\nTue Nov 25 11:03:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de (bfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4)\nbfef0d1e369a19bbfb0e57cd5ee41f5e71db984c01962ce4d61d4b2daeb71ad4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:03:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:41.991 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[15f47c5d-6b7c-4b18-b8c2-d32602c9770a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:03:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:41.992 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23ecff9c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:03:41 compute-0 nova_compute[189381]: 2025-11-25 11:03:41.994 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:41 compute-0 kernel: tap23ecff9c-50: left promiscuous mode
Nov 25 11:03:42 compute-0 nova_compute[189381]: 2025-11-25 11:03:42.006 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:42.008 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[bdfe0933-405c-479f-9f80-a24563c489d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:03:42 compute-0 nova_compute[189381]: 2025-11-25 11:03:42.009 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:42.024 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1a3e19-b52b-47fa-905b-2f99b3825c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:03:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:42.025 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d97ac2-d30f-440e-be47-9780a5e81da3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:03:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:42.039 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[089bffd1-7566-4dbf-aff2-fc442d1f21bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 545662, 'reachable_time': 42362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254411, 'error': None, 'target': 'ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:03:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d23ecff9c\x2d5f66\x2d4ace\x2d9c23\x2d23cc4a7533de.mount: Deactivated successfully.
Nov 25 11:03:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:42.041 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-23ecff9c-5f66-4ace-9c23-23cc4a7533de deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:03:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:03:42.041 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbe68ca-27c8-4cdc-9d18-26cc8ee11a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:03:42 compute-0 podman[254408]: 2025-11-25 11:03:42.087607162 +0000 UTC m=+0.057577097 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, architecture=x86_64, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Nov 25 11:03:42 compute-0 nova_compute[189381]: 2025-11-25 11:03:42.576 189385 INFO nova.compute.manager [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Took 1.15 seconds to destroy the instance on the hypervisor.
Nov 25 11:03:42 compute-0 nova_compute[189381]: 2025-11-25 11:03:42.576 189385 DEBUG oslo.service.loopingcall [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:03:42 compute-0 nova_compute[189381]: 2025-11-25 11:03:42.577 189385 DEBUG nova.compute.manager [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:03:42 compute-0 nova_compute[189381]: 2025-11-25 11:03:42.577 189385 DEBUG nova.network.neutron [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:03:43 compute-0 nova_compute[189381]: 2025-11-25 11:03:43.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:43 compute-0 nova_compute[189381]: 2025-11-25 11:03:43.481 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:46 compute-0 nova_compute[189381]: 2025-11-25 11:03:46.753 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:47 compute-0 podman[254432]: 2025-11-25 11:03:47.940912006 +0000 UTC m=+0.059273946 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 11:03:48 compute-0 nova_compute[189381]: 2025-11-25 11:03:48.483 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:48 compute-0 nova_compute[189381]: 2025-11-25 11:03:48.533 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:51 compute-0 nova_compute[189381]: 2025-11-25 11:03:51.756 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:51 compute-0 podman[254452]: 2025-11-25 11:03:51.945744587 +0000 UTC m=+0.059502304 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:03:51 compute-0 nova_compute[189381]: 2025-11-25 11:03:51.953 189385 DEBUG nova.network.neutron [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:03:51 compute-0 podman[254451]: 2025-11-25 11:03:51.975110037 +0000 UTC m=+0.092851940 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 11:03:51 compute-0 nova_compute[189381]: 2025-11-25 11:03:51.985 189385 DEBUG nova.compute.manager [req-cb12281d-111f-4638-a95f-6fe27fb0a89c req-3fd1dd28-c73b-45f6-bbbf-46cc055531a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Received event network-vif-deleted-5a6cf231-3edc-4338-bb8e-74f0f7e6672d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:03:51 compute-0 nova_compute[189381]: 2025-11-25 11:03:51.985 189385 INFO nova.compute.manager [req-cb12281d-111f-4638-a95f-6fe27fb0a89c req-3fd1dd28-c73b-45f6-bbbf-46cc055531a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Neutron deleted interface 5a6cf231-3edc-4338-bb8e-74f0f7e6672d; detaching it from the instance and deleting it from the info cache
Nov 25 11:03:51 compute-0 nova_compute[189381]: 2025-11-25 11:03:51.985 189385 DEBUG nova.network.neutron [req-cb12281d-111f-4638-a95f-6fe27fb0a89c req-3fd1dd28-c73b-45f6-bbbf-46cc055531a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:03:52 compute-0 nova_compute[189381]: 2025-11-25 11:03:52.211 189385 INFO nova.compute.manager [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Took 9.63 seconds to deallocate network for instance.
Nov 25 11:03:52 compute-0 nova_compute[189381]: 2025-11-25 11:03:52.223 189385 DEBUG nova.compute.manager [req-cb12281d-111f-4638-a95f-6fe27fb0a89c req-3fd1dd28-c73b-45f6-bbbf-46cc055531a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Detach interface failed, port_id=5a6cf231-3edc-4338-bb8e-74f0f7e6672d, reason: Instance c4d7af36-620f-46df-8347-4eaeed7856c6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 25 11:03:52 compute-0 nova_compute[189381]: 2025-11-25 11:03:52.403 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:03:52 compute-0 nova_compute[189381]: 2025-11-25 11:03:52.404 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:03:52 compute-0 nova_compute[189381]: 2025-11-25 11:03:52.661 189385 DEBUG nova.compute.provider_tree [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:03:52 compute-0 nova_compute[189381]: 2025-11-25 11:03:52.677 189385 DEBUG nova.scheduler.client.report [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:03:52 compute-0 nova_compute[189381]: 2025-11-25 11:03:52.712 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:03:52 compute-0 nova_compute[189381]: 2025-11-25 11:03:52.821 189385 INFO nova.scheduler.client.report [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Deleted allocations for instance c4d7af36-620f-46df-8347-4eaeed7856c6
Nov 25 11:03:53 compute-0 nova_compute[189381]: 2025-11-25 11:03:53.254 189385 DEBUG oslo_concurrency.lockutils [None req-84c1c439-649f-4540-a4b0-1b61b49ccfc0 28101b622acc41c3aa3608e548b7ef96 826c484414ce4e89a03cf37f2359f956 - - default default] Lock "c4d7af36-620f-46df-8347-4eaeed7856c6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:03:53 compute-0 nova_compute[189381]: 2025-11-25 11:03:53.487 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:55 compute-0 podman[254494]: 2025-11-25 11:03:55.98110621 +0000 UTC m=+0.091989354 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Nov 25 11:03:55 compute-0 podman[254493]: 2025-11-25 11:03:55.984444087 +0000 UTC m=+0.098137882 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.022 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.022 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.022 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.023 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.023 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.023 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.047 189385 DEBUG nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.053 189385 DEBUG nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.054 189385 WARNING nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.054 189385 WARNING nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.054 189385 WARNING nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.054 189385 INFO nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Removable base files: /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87 /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.054 189385 INFO nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/efa46ac01001129056abbd05fc9719c35c46db87
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.054 189385 INFO nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/2f0b4681cd51b11d0e715ed9a7bc9065a87be20c
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.055 189385 INFO nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.055 189385 DEBUG nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.055 189385 DEBUG nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.055 189385 DEBUG nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.055 189385 INFO nova.virt.libvirt.imagecache [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.719 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068621.717849, c4d7af36-620f-46df-8347-4eaeed7856c6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.719 189385 INFO nova.compute.manager [-] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] VM Stopped (Lifecycle Event)
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.744 189385 DEBUG nova.compute.manager [None req-de73ef07-829d-4623-8a6d-1f27c5b212bb - - - - - -] [instance: c4d7af36-620f-46df-8347-4eaeed7856c6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:03:56 compute-0 nova_compute[189381]: 2025-11-25 11:03:56.759 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:58 compute-0 nova_compute[189381]: 2025-11-25 11:03:58.489 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:03:59 compute-0 podman[203557]: time="2025-11-25T11:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:03:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:03:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Nov 25 11:03:59 compute-0 podman[254538]: 2025-11-25 11:03:59.946721145 +0000 UTC m=+0.060083740 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:04:01 compute-0 openstack_network_exporter[205722]: ERROR   11:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:04:01 compute-0 openstack_network_exporter[205722]: ERROR   11:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:04:01 compute-0 openstack_network_exporter[205722]: ERROR   11:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:04:01 compute-0 openstack_network_exporter[205722]: ERROR   11:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:04:01 compute-0 openstack_network_exporter[205722]: 
Nov 25 11:04:01 compute-0 openstack_network_exporter[205722]: ERROR   11:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:04:01 compute-0 openstack_network_exporter[205722]: 
Nov 25 11:04:01 compute-0 nova_compute[189381]: 2025-11-25 11:04:01.761 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:03 compute-0 nova_compute[189381]: 2025-11-25 11:04:03.490 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:03 compute-0 nova_compute[189381]: 2025-11-25 11:04:03.889 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:06 compute-0 nova_compute[189381]: 2025-11-25 11:04:06.764 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:08 compute-0 nova_compute[189381]: 2025-11-25 11:04:08.417 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:08 compute-0 nova_compute[189381]: 2025-11-25 11:04:08.492 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:09 compute-0 podman[254562]: 2025-11-25 11:04:09.952332189 +0000 UTC m=+0.065502967 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:04:09 compute-0 podman[254561]: 2025-11-25 11:04:09.975251652 +0000 UTC m=+0.092941741 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844)
Nov 25 11:04:11 compute-0 nova_compute[189381]: 2025-11-25 11:04:11.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:11 compute-0 nova_compute[189381]: 2025-11-25 11:04:11.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 11:04:11 compute-0 nova_compute[189381]: 2025-11-25 11:04:11.767 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:13 compute-0 podman[254601]: 2025-11-25 11:04:13.012430833 +0000 UTC m=+0.112555299 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, build-date=2024-09-18T21:23:30, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Nov 25 11:04:13 compute-0 nova_compute[189381]: 2025-11-25 11:04:13.494 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:13 compute-0 nova_compute[189381]: 2025-11-25 11:04:13.898 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:14 compute-0 nova_compute[189381]: 2025-11-25 11:04:14.156 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:16 compute-0 nova_compute[189381]: 2025-11-25 11:04:16.769 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:18 compute-0 nova_compute[189381]: 2025-11-25 11:04:18.496 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:18 compute-0 podman[254622]: 2025-11-25 11:04:18.952986491 +0000 UTC m=+0.069224575 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.007 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.008 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.034 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.248 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.249 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.303 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.303 189385 INFO nova.compute.claims [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.495 189385 DEBUG nova.compute.provider_tree [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:04:20 compute-0 nova_compute[189381]: 2025-11-25 11:04:20.515 189385 DEBUG nova.scheduler.client.report [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:04:21 compute-0 nova_compute[189381]: 2025-11-25 11:04:21.772 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.061 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.062 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.209 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.210 189385 DEBUG nova.network.neutron [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.255 189385 INFO nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.285 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.466 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.468 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.468 189385 INFO nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Creating image(s)
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.469 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.469 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.470 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.471 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:22 compute-0 nova_compute[189381]: 2025-11-25 11:04:22.471 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:22 compute-0 podman[254642]: 2025-11-25 11:04:22.941327034 +0000 UTC m=+0.057948079 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal)
Nov 25 11:04:22 compute-0 podman[254643]: 2025-11-25 11:04:22.947282846 +0000 UTC m=+0.059045450 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:04:23 compute-0 nova_compute[189381]: 2025-11-25 11:04:23.184 189385 DEBUG nova.policy [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:04:23 compute-0 nova_compute[189381]: 2025-11-25 11:04:23.498 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:25 compute-0 nova_compute[189381]: 2025-11-25 11:04:25.091 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:25 compute-0 nova_compute[189381]: 2025-11-25 11:04:25.092 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 11:04:25 compute-0 nova_compute[189381]: 2025-11-25 11:04:25.133 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 11:04:26 compute-0 nova_compute[189381]: 2025-11-25 11:04:26.776 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:27 compute-0 podman[254688]: 2025-11-25 11:04:27.01101268 +0000 UTC m=+0.115257677 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:04:27 compute-0 podman[254687]: 2025-11-25 11:04:27.05350014 +0000 UTC m=+0.161665720 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.064 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.088 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.152 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe.part --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.154 189385 DEBUG nova.virt.images [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] 62ab6b08-ec10-4838-aa81-24150af36537 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.156 189385 DEBUG nova.privsep.utils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.156 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe.part /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.421 189385 DEBUG nova.network.neutron [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Successfully created port: 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.428 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe.part /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe.converted" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.437 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.523 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe.converted --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.526 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 5.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.558 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.627 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.628 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.629 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.644 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.711 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.713 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe,backing_fmt=raw /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.759 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe,backing_fmt=raw /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.761 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.762 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.831 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.833 189385 DEBUG nova.virt.disk.api [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Checking if we can resize image /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.833 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.903 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.905 189385 DEBUG nova.virt.disk.api [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Cannot resize image /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.905 189385 DEBUG nova.objects.instance [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lazy-loading 'migration_context' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.923 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.924 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Ensure instance console log exists: /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.924 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.925 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:27 compute-0 nova_compute[189381]: 2025-11-25 11:04:27.925 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:28 compute-0 nova_compute[189381]: 2025-11-25 11:04:28.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:28 compute-0 nova_compute[189381]: 2025-11-25 11:04:28.500 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:29 compute-0 podman[203557]: time="2025-11-25T11:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:04:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:04:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Nov 25 11:04:30 compute-0 podman[254755]: 2025-11-25 11:04:30.945911896 +0000 UTC m=+0.056824556 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.174 189385 DEBUG nova.network.neutron [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Successfully updated port: 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.201 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.202 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.202 189385 DEBUG nova.network.neutron [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:04:31 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.382 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "709ba638-65f8-4345-b8ca-b969e9719f92" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.382 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:31 compute-0 openstack_network_exporter[205722]: ERROR   11:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:04:31 compute-0 openstack_network_exporter[205722]: ERROR   11:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:04:31 compute-0 openstack_network_exporter[205722]: ERROR   11:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:04:31 compute-0 openstack_network_exporter[205722]: ERROR   11:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:04:31 compute-0 openstack_network_exporter[205722]: ERROR   11:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.432 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.557 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.557 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.568 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.569 189385 INFO nova.compute.claims [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:04:31 compute-0 nova_compute[189381]: 2025-11-25 11:04:31.779 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.193 189385 DEBUG nova.compute.provider_tree [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.207 189385 DEBUG nova.scheduler.client.report [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.219 189385 DEBUG nova.network.neutron [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.300 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.300 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.619 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.619 189385 DEBUG nova.network.neutron [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.658 189385 INFO nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:04:32 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:32.666 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:04:32 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:32.667 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.669 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.699 189385 DEBUG nova.compute.manager [req-2b523221-8e48-4e6a-98d7-79cd2ba76201 req-7e39f442-4e9a-46e5-937c-2cd66f86ef6e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received event network-changed-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.700 189385 DEBUG nova.compute.manager [req-2b523221-8e48-4e6a-98d7-79cd2ba76201 req-7e39f442-4e9a-46e5-937c-2cd66f86ef6e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Refreshing instance network info cache due to event network-changed-6ed45132-26d0-4000-b0b9-bb7c45ac85f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.701 189385 DEBUG oslo_concurrency.lockutils [req-2b523221-8e48-4e6a-98d7-79cd2ba76201 req-7e39f442-4e9a-46e5-937c-2cd66f86ef6e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:04:32 compute-0 nova_compute[189381]: 2025-11-25 11:04:32.707 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.001 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.003 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.004 189385 INFO nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Creating image(s)
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.004 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "/var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.005 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "/var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.005 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "/var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.019 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.038 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.072 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.072 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.073 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.073 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.092 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.093 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.094 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.107 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.169 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.170 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.214 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.215 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.216 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.276 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.277 189385 DEBUG nova.virt.disk.api [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Checking if we can resize image /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.277 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.348 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.349 189385 DEBUG nova.virt.disk.api [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Cannot resize image /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.350 189385 DEBUG nova.objects.instance [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lazy-loading 'migration_context' on Instance uuid 709ba638-65f8-4345-b8ca-b969e9719f92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.361 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.362 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Ensure instance console log exists: /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.362 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.363 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.363 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.378 189385 DEBUG nova.policy [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '63532fa3761a42a3a6f2dbb256ccd5d1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2013a3a878cf48c19ee356b2eb249216', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.502 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.505 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.506 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5336MB free_disk=72.12926483154297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.506 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.506 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.863 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.863 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 709ba638-65f8-4345-b8ca-b969e9719f92 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.864 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.864 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.938 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:04:33 compute-0 nova_compute[189381]: 2025-11-25 11:04:33.955 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:04:34 compute-0 nova_compute[189381]: 2025-11-25 11:04:34.079 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:04:34 compute-0 nova_compute[189381]: 2025-11-25 11:04:34.080 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:35 compute-0 nova_compute[189381]: 2025-11-25 11:04:35.063 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:35 compute-0 nova_compute[189381]: 2025-11-25 11:04:35.064 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:04:35 compute-0 nova_compute[189381]: 2025-11-25 11:04:35.064 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:04:35 compute-0 nova_compute[189381]: 2025-11-25 11:04:35.086 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 11:04:35 compute-0 nova_compute[189381]: 2025-11-25 11:04:35.086 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 11:04:35 compute-0 nova_compute[189381]: 2025-11-25 11:04:35.086 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 11:04:35 compute-0 nova_compute[189381]: 2025-11-25 11:04:35.086 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:35.673 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:36.070 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:36.070 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:36.071 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:36 compute-0 nova_compute[189381]: 2025-11-25 11:04:36.782 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:37 compute-0 nova_compute[189381]: 2025-11-25 11:04:37.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:37 compute-0 nova_compute[189381]: 2025-11-25 11:04:37.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:38 compute-0 nova_compute[189381]: 2025-11-25 11:04:38.505 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.512 189385 DEBUG nova.network.neutron [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.588 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.589 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Instance network_info: |[{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.590 189385 DEBUG oslo_concurrency.lockutils [req-2b523221-8e48-4e6a-98d7-79cd2ba76201 req-7e39f442-4e9a-46e5-937c-2cd66f86ef6e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.590 189385 DEBUG nova.network.neutron [req-2b523221-8e48-4e6a-98d7-79cd2ba76201 req-7e39f442-4e9a-46e5-937c-2cd66f86ef6e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Refreshing network info cache for port 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.594 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Start _get_guest_xml network_info=[{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T11:04:01Z,direct_url=<?>,disk_format='qcow2',id=62ab6b08-ec10-4838-aa81-24150af36537,min_disk=0,min_ram=0,name='tempest-scenario-img--502157881',owner='d057fe4d034a4f13b6e08dc8083cad5b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T11:04:03Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': '62ab6b08-ec10-4838-aa81-24150af36537'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.603 189385 WARNING nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.610 189385 DEBUG nova.virt.libvirt.host [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.611 189385 DEBUG nova.virt.libvirt.host [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.622 189385 DEBUG nova.virt.libvirt.host [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.623 189385 DEBUG nova.virt.libvirt.host [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.623 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.623 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T11:04:01Z,direct_url=<?>,disk_format='qcow2',id=62ab6b08-ec10-4838-aa81-24150af36537,min_disk=0,min_ram=0,name='tempest-scenario-img--502157881',owner='d057fe4d034a4f13b6e08dc8083cad5b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T11:04:03Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.624 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.624 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.625 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.625 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.625 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.626 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.626 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.627 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.627 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.627 189385 DEBUG nova.virt.hardware [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.631 189385 DEBUG nova.virt.libvirt.vif [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4',id=10,image_ref='62ab6b08-ec10-4838-aa81-24150af36537',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f33016ec-000f-44cf-b7cc-2122723ba143'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d057fe4d034a4f13b6e08dc8083cad5b',ramdisk_id='',reservation_id='r-3jhjkex5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ab6b08-ec10-4838-aa81-24150af36537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1327093183',owner_user_name='tempest-PrometheusGabbiTest-1327093183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:04:22Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='95acdf386c1e42c8a6da1f7b9603054f',uuid=18a30ced-09e6-4c6a-9ea3-4c59f437a71a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.631 189385 DEBUG nova.network.os_vif_util [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converting VIF {"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.632 189385 DEBUG nova.network.os_vif_util [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:bc:05,bridge_name='br-int',has_traffic_filtering=True,id=6ed45132-26d0-4000-b0b9-bb7c45ac85f7,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed45132-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.633 189385 DEBUG nova.objects.instance [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lazy-loading 'pci_devices' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.645 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <uuid>18a30ced-09e6-4c6a-9ea3-4c59f437a71a</uuid>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <name>instance-0000000a</name>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <nova:name>te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4</nova:name>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:04:39</nova:creationTime>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:04:39 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:04:39 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:04:39 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:04:39 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:04:39 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:04:39 compute-0 nova_compute[189381]:         <nova:user uuid="95acdf386c1e42c8a6da1f7b9603054f">tempest-PrometheusGabbiTest-1327093183-project-member</nova:user>
Nov 25 11:04:39 compute-0 nova_compute[189381]:         <nova:project uuid="d057fe4d034a4f13b6e08dc8083cad5b">tempest-PrometheusGabbiTest-1327093183</nova:project>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="62ab6b08-ec10-4838-aa81-24150af36537"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:04:39 compute-0 nova_compute[189381]:         <nova:port uuid="6ed45132-26d0-4000-b0b9-bb7c45ac85f7">
Nov 25 11:04:39 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.2.10" ipVersion="4"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <system>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <entry name="serial">18a30ced-09e6-4c6a-9ea3-4c59f437a71a</entry>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <entry name="uuid">18a30ced-09e6-4c6a-9ea3-4c59f437a71a</entry>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </system>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <os>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   </os>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <features>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   </features>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.config"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:fd:bc:05"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <target dev="tap6ed45132-26"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/console.log" append="off"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <video>
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </video>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:04:39 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:04:39 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:04:39 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:04:39 compute-0 nova_compute[189381]: </domain>
Nov 25 11:04:39 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.647 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Preparing to wait for external event network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.647 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.648 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.648 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.649 189385 DEBUG nova.virt.libvirt.vif [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4',id=10,image_ref='62ab6b08-ec10-4838-aa81-24150af36537',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f33016ec-000f-44cf-b7cc-2122723ba143'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d057fe4d034a4f13b6e08dc8083cad5b',ramdisk_id='',reservation_id='r-3jhjkex5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ab6b08-ec10-4838-aa81-24150af36537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1327093183',owner_user_name='tempest-PrometheusGabbiTest-1327093183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:04:22Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='95acdf386c1e42c8a6da1f7b9603054f',uuid=18a30ced-09e6-4c6a-9ea3-4c59f437a71a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.649 189385 DEBUG nova.network.os_vif_util [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converting VIF {"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.650 189385 DEBUG nova.network.os_vif_util [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:bc:05,bridge_name='br-int',has_traffic_filtering=True,id=6ed45132-26d0-4000-b0b9-bb7c45ac85f7,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed45132-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.651 189385 DEBUG os_vif [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:bc:05,bridge_name='br-int',has_traffic_filtering=True,id=6ed45132-26d0-4000-b0b9-bb7c45ac85f7,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed45132-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.652 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.652 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.653 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.657 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.658 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ed45132-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.658 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ed45132-26, col_values=(('external_ids', {'iface-id': '6ed45132-26d0-4000-b0b9-bb7c45ac85f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:bc:05', 'vm-uuid': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.660 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:39 compute-0 NetworkManager[56317]: <info>  [1764068679.6623] manager: (tap6ed45132-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.663 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.673 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.675 189385 INFO os_vif [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:bc:05,bridge_name='br-int',has_traffic_filtering=True,id=6ed45132-26d0-4000-b0b9-bb7c45ac85f7,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed45132-26')
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.726 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.726 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.727 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] No VIF found with MAC fa:16:3e:fd:bc:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:04:39 compute-0 nova_compute[189381]: 2025-11-25 11:04:39.727 189385 INFO nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Using config drive
Nov 25 11:04:40 compute-0 nova_compute[189381]: 2025-11-25 11:04:40.125 189385 DEBUG nova.network.neutron [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Successfully created port: a1692084-6415-42ca-acb4-a814c874f56a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:04:40 compute-0 podman[254798]: 2025-11-25 11:04:40.247705176 +0000 UTC m=+0.094156727 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:04:40 compute-0 podman[254799]: 2025-11-25 11:04:40.258476027 +0000 UTC m=+0.099425928 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.248 189385 INFO nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Creating config drive at /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.config
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.255 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04c2mnyx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.388 189385 DEBUG oslo_concurrency.processutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp04c2mnyx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
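The pair of processutils lines above is the whole config-drive build: Nova shells out to mkisofs, packs a staging directory into an ISO9660 image labelled config-2, and checks the exit code. A sketch of the same invocation (assuming mkisofs is installed; paths are illustrative):

    import subprocess

    def build_config_drive(iso_path, staging_dir, publisher="OpenStack Compute"):
        # Same flags as the logged command: Joliet + Rock Ridge, relaxed file
        # names, volume label 'config-2' so the guest can find the drive.
        cmd = [
            "/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher, "-quiet", "-J", "-r",
            "-V", "config-2", staging_dir,
        ]
        subprocess.run(cmd, check=True)  # the log shows 'returned: 0 in 0.133s'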
Nov 25 11:04:41 compute-0 kernel: tap6ed45132-26: entered promiscuous mode
Nov 25 11:04:41 compute-0 ovn_controller[97779]: 2025-11-25T11:04:41Z|00129|binding|INFO|Claiming lport 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 for this chassis.
Nov 25 11:04:41 compute-0 NetworkManager[56317]: <info>  [1764068681.4540] manager: (tap6ed45132-26): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Nov 25 11:04:41 compute-0 ovn_controller[97779]: 2025-11-25T11:04:41Z|00130|binding|INFO|6ed45132-26d0-4000-b0b9-bb7c45ac85f7: Claiming fa:16:3e:fd:bc:05 10.100.2.10
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.455 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:41 compute-0 systemd-udevd[254853]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.493 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.499 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:41 compute-0 ovn_controller[97779]: 2025-11-25T11:04:41Z|00131|binding|INFO|Setting lport 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 ovn-installed in OVS
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.501 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:41 compute-0 systemd-machined[155706]: New machine qemu-11-instance-0000000a.
Nov 25 11:04:41 compute-0 NetworkManager[56317]: <info>  [1764068681.5101] device (tap6ed45132-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:04:41 compute-0 NetworkManager[56317]: <info>  [1764068681.5109] device (tap6ed45132-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:04:41 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.849 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068681.8480227, 18a30ced-09e6-4c6a-9ea3-4c59f437a71a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:04:41 compute-0 nova_compute[189381]: 2025-11-25 11:04:41.849 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] VM Started (Lifecycle Event)
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.069 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.075 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068681.8481288, 18a30ced-09e6-4c6a-9ea3-4c59f437a71a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.076 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] VM Paused (Lifecycle Event)
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.112 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.117 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.148 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] During sync_power_state the instance has a pending task (spawning). Skip.
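The lines above capture the power-state sync rule: the libvirt 'Paused' event arrives while the instance is still building/spawning (DB power_state 0, VM power_state 3), so the sync is skipped rather than racing the in-flight task. A simplified sketch of that decision (not Nova's actual code):

    def should_sync_power_state(db_power_state, vm_power_state, task_state):
        # A pending task (e.g. 'spawning') owns the instance's state
        # transitions; syncing now would fight it, hence 'Skip.' in the log.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    should_sync_power_state(0, 3, "spawning")  # -> False, matching the log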
Nov 25 11:04:42 compute-0 ovn_controller[97779]: 2025-11-25T11:04:42Z|00132|binding|INFO|Setting lport 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 up in Southbound
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.307 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:bc:05 10.100.2.10'], port_security=['fa:16:3e:fd:bc:05 10.100.2.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.10/16', 'neutron:device_id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6dd922d1-432e-41c0-9438-975e4d0bc760', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da371dea-a01c-4170-8065-7d1b11a4ac95, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=6ed45132-26d0-4000-b0b9-bb7c45ac85f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.309 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 in datapath a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 bound to our chassis
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.310 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.322 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8ca44918-c259-4472-a24b-af69764a2d50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.323 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa82a38fb-81 in ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
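Provisioning the ovnmeta- namespace starts with a veth pair: one end stays on the host for br-int, the peer lands inside the namespace to serve 169.254.169.254. Neutron drives this through the privsep daemon (the reply[...] lines that follow); a direct, hypothetical sketch with pyroute2, which is what those privileged calls wrap:

    from pyroute2 import IPRoute, netns

    def create_metadata_veth(ns_name, host_if, ns_if):
        # Standalone version for illustration; names mirror the log
        # (ns 'ovnmeta-a82a38fb-...', pair tapa82a38fb-80 / tapa82a38fb-81).
        if ns_name not in netns.listnetns():
            netns.create(ns_name)
        with IPRoute() as ipr:
            ipr.link("add", ifname=host_if, kind="veth", peer=ns_if)
            peer = ipr.link_lookup(ifname=ns_if)[0]
            ipr.link("set", index=peer, net_ns_fd=ns_name)  # move peer into ns
            host = ipr.link_lookup(ifname=host_if)[0]
            ipr.link("set", index=host, state="up")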
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.324 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa82a38fb-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.324 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[a6b731bb-7773-472c-b8a0-be0c51aa5692]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.325 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2fdcd5-65c1-4334-a93e-eab17f291b1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.336 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[d84e1425-e575-4669-9534-f843727b4b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.361 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[312450ff-53c4-4953-9199-13242c6cdb01]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.389 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[a21c574c-2e55-4b2a-a7da-8fa882d051f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 systemd-udevd[254856]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.395 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[59ee7888-1440-4741-a5f8-3433a47628f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 NetworkManager[56317]: <info>  [1764068682.3961] manager: (tapa82a38fb-80): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.425 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[3ddaf01d-067d-4a40-953e-a069474c2c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.428 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2c4a8d-1628-4daf-bb57-b9e28267796b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 NetworkManager[56317]: <info>  [1764068682.4509] device (tapa82a38fb-80): carrier: link connected
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.457 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[5680d6d1-765b-4602-a34d-11a8b002930f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.472 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0e236b2f-c9d4-439f-86f6-9041b71f86d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa82a38fb-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:c9:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559003, 'reachable_time': 43940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254895, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.486 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa61381-d384-44c7-8568-9be986dd61f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:c978'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559003, 'tstamp': 559003}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254896, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.501 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[cea80e81-0a18-458c-88c7-461263080753]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa82a38fb-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:c9:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559003, 'reachable_time': 43940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254897, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.534 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[69414233-2050-4560-8bda-625a135edfb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.604 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0660d071-eac9-4a5a-ac43-d29170bbde07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.605 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa82a38fb-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.606 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.606 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa82a38fb-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.609 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:42 compute-0 NetworkManager[56317]: <info>  [1764068682.6100] manager: (tapa82a38fb-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 25 11:04:42 compute-0 kernel: tapa82a38fb-80: entered promiscuous mode
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.613 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.614 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa82a38fb-80, col_values=(('external_ids', {'iface-id': '915e80eb-5def-4cf6-b65e-79eab93b7232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
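The three ovsdbapp transactions above are the host-side plumbing: delete any stale tapa82a38fb-80 from br-ex, add it to br-int, then set external_ids:iface-id so ovn-controller can match the interface to its logical port. A sketch of the same sequence with ovsdbapp (assuming a local OVS unix socket; the agent keeps a long-lived connection and runs three separate transactions instead of one):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Mirrors DelPortCommand / AddPortCommand / DbSetCommand in the log.
        txn.add(api.del_port("tapa82a38fb-80", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tapa82a38fb-80", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapa82a38fb-80",
            ("external_ids",
             {"iface-id": "915e80eb-5def-4cf6-b65e-79eab93b7232"})))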
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.616 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:42 compute-0 ovn_controller[97779]: 2025-11-25T11:04:42Z|00133|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=1)
Nov 25 11:04:42 compute-0 nova_compute[189381]: 2025-11-25 11:04:42.630 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.631 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
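The pidfile probe above is deliberately tolerant: a missing .pid.haproxy just means no proxy is running for this network yet, so the agent falls through to spawning one. A sketch of that helper's semantics:

    def get_value_from_file(path, converter=None):
        # Missing or unreadable file -> None (logged, not raised), as above.
        try:
            with open(path) as f:
                raw = f.read().strip()
            return converter(raw) if converter else raw
        except (OSError, ValueError):
            return None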
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.632 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[681511d9-893a-4621-b9aa-5d8f524faa67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.633 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5.pid.haproxy
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:04:42 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:42.634 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'env', 'PROCESS_TAG=haproxy-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
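With the config rendered, the agent launches haproxy inside the network namespace via rootwrap; the PROCESS_TAG environment variable is what the kill scripts later use to find the right process. The same launch, minus rootwrap (assumes root privileges):

    import subprocess

    def launch_metadata_proxy(ns_name, conf_path):
        # ns_name e.g. 'ovnmeta-a82a38fb-...'; the tag carries the network id.
        net_id = ns_name.removeprefix("ovnmeta-")
        cmd = [
            "ip", "netns", "exec", ns_name,
            "env", f"PROCESS_TAG=haproxy-{net_id}",
            "haproxy", "-f", conf_path,
        ]
        # haproxy backgrounds itself ('daemon' in the cfg), so this returns
        # once the workers are forked, matching the NOTICE lines below.
        subprocess.run(cmd, check=True)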
Nov 25 11:04:43 compute-0 nova_compute[189381]: 2025-11-25 11:04:43.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:43 compute-0 podman[254928]: 2025-11-25 11:04:43.038426823 +0000 UTC m=+0.065995601 container create 97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 25 11:04:43 compute-0 systemd[1]: Started libpod-conmon-97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066.scope.
Nov 25 11:04:43 compute-0 podman[254928]: 2025-11-25 11:04:43.003929085 +0000 UTC m=+0.031497883 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:04:43 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:04:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab000c5d34714a0115083fdf3255d3c45c583d1ff6fc38609ff6d7643317f69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:43 compute-0 ovn_controller[97779]: 2025-11-25T11:04:43Z|00134|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:04:43 compute-0 podman[254928]: 2025-11-25 11:04:43.127545993 +0000 UTC m=+0.155114791 container init 97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:04:43 compute-0 podman[254928]: 2025-11-25 11:04:43.136596415 +0000 UTC m=+0.164165193 container start 97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:04:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 25 11:04:43 compute-0 neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5[254944]: [NOTICE]   (254961) : New worker (254967) forked
Nov 25 11:04:43 compute-0 neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5[254944]: [NOTICE]   (254961) : Loading success.
Nov 25 11:04:43 compute-0 podman[254941]: 2025-11-25 11:04:43.185950633 +0000 UTC m=+0.106740020 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, config_id=edpm, com.redhat.component=ubi9-container, container_name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc.)
Nov 25 11:04:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 25 11:04:43 compute-0 nova_compute[189381]: 2025-11-25 11:04:43.508 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:44 compute-0 nova_compute[189381]: 2025-11-25 11:04:44.665 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:48 compute-0 nova_compute[189381]: 2025-11-25 11:04:48.189 189385 DEBUG nova.network.neutron [req-2b523221-8e48-4e6a-98d7-79cd2ba76201 req-7e39f442-4e9a-46e5-937c-2cd66f86ef6e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updated VIF entry in instance network info cache for port 6ed45132-26d0-4000-b0b9-bb7c45ac85f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:04:48 compute-0 nova_compute[189381]: 2025-11-25 11:04:48.190 189385 DEBUG nova.network.neutron [req-2b523221-8e48-4e6a-98d7-79cd2ba76201 req-7e39f442-4e9a-46e5-937c-2cd66f86ef6e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:04:48 compute-0 nova_compute[189381]: 2025-11-25 11:04:48.222 189385 DEBUG oslo_concurrency.lockutils [req-2b523221-8e48-4e6a-98d7-79cd2ba76201 req-7e39f442-4e9a-46e5-937c-2cd66f86ef6e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
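The network_info blob cached above is plain JSON: a list of VIFs, each carrying its network, subnets and fixed IPs. A small sketch of walking it (field names exactly as in the cached entry):

    import json

    def fixed_ips(network_info_json):
        # Collect every fixed address across all VIFs and subnets.
        ips = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                ips += [ip["address"] for ip in subnet["ips"]
                        if ip["type"] == "fixed"]
        return ips

    # For the cache entry above: ['10.100.2.10']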
Nov 25 11:04:48 compute-0 nova_compute[189381]: 2025-11-25 11:04:48.510 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:48 compute-0 nova_compute[189381]: 2025-11-25 11:04:48.833 189385 DEBUG nova.network.neutron [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Successfully updated port: a1692084-6415-42ca-acb4-a814c874f56a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:04:49 compute-0 nova_compute[189381]: 2025-11-25 11:04:49.269 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:04:49 compute-0 nova_compute[189381]: 2025-11-25 11:04:49.270 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquired lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:04:49 compute-0 nova_compute[189381]: 2025-11-25 11:04:49.270 189385 DEBUG nova.network.neutron [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:04:49 compute-0 nova_compute[189381]: 2025-11-25 11:04:49.386 189385 DEBUG nova.compute.manager [req-09a5d394-e3ca-4126-a943-1b2ece470c91 req-4d9ca081-4707-4756-8fae-29481be5c18e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received event network-changed-a1692084-6415-42ca-acb4-a814c874f56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:04:49 compute-0 nova_compute[189381]: 2025-11-25 11:04:49.386 189385 DEBUG nova.compute.manager [req-09a5d394-e3ca-4126-a943-1b2ece470c91 req-4d9ca081-4707-4756-8fae-29481be5c18e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Refreshing instance network info cache due to event network-changed-a1692084-6415-42ca-acb4-a814c874f56a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:04:49 compute-0 nova_compute[189381]: 2025-11-25 11:04:49.387 189385 DEBUG oslo_concurrency.lockutils [req-09a5d394-e3ca-4126-a943-1b2ece470c91 req-4d9ca081-4707-4756-8fae-29481be5c18e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:04:49 compute-0 nova_compute[189381]: 2025-11-25 11:04:49.668 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:49 compute-0 nova_compute[189381]: 2025-11-25 11:04:49.673 189385 DEBUG nova.network.neutron [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:04:49 compute-0 podman[254996]: 2025-11-25 11:04:49.944678845 +0000 UTC m=+0.057794495 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:04:51 compute-0 nova_compute[189381]: 2025-11-25 11:04:51.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.077 189385 DEBUG nova.network.neutron [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Updating instance_info_cache with network_info: [{"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.129 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Releasing lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.130 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Instance network_info: |[{"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.131 189385 DEBUG oslo_concurrency.lockutils [req-09a5d394-e3ca-4126-a943-1b2ece470c91 req-4d9ca081-4707-4756-8fae-29481be5c18e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.131 189385 DEBUG nova.network.neutron [req-09a5d394-e3ca-4126-a943-1b2ece470c91 req-4d9ca081-4707-4756-8fae-29481be5c18e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Refreshing network info cache for port a1692084-6415-42ca-acb4-a814c874f56a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.134 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Start _get_guest_xml network_info=[{"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.143 189385 WARNING nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.164 189385 DEBUG nova.virt.libvirt.host [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.165 189385 DEBUG nova.virt.libvirt.host [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.172 189385 DEBUG nova.virt.libvirt.host [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.173 189385 DEBUG nova.virt.libvirt.host [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
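The two probes above (cgroups V1, then V2) decide whether CPU shares and quotas can be applied to guests; on this host only the unified v2 hierarchy has the cpu controller. A standalone sketch of the same check:

    import os

    def host_has_cpu_controller():
        # cgroup v2 lists enabled controllers in a single file; cgroup v1
        # exposes a per-controller mount point instead.
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()        # v2: found on this host
        except FileNotFoundError:
            return os.path.isdir("/sys/fs/cgroup/cpu")  # v1: missing here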
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.173 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.174 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.174 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.174 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.175 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.175 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.175 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.176 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.176 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.177 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.177 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.177 189385 DEBUG nova.virt.hardware [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
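The debug lines above walk nova's _get_desirable_cpu_topologies(): with no CPU limits on the flavor or image (the 0:0:0 lines), the limits fall back to 65536 per dimension, and for a 1-vCPU guest the only topology whose product matches is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration, under the assumption that a topology is valid when sockets*cores*threads equals the vCPU count (illustrative only, not nova's actual code):

    from dataclasses import dataclass

    @dataclass
    class Topology:
        sockets: int
        cores: int
        threads: int

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Only values up to the vCPU count can divide it, so clamp each axis.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield Topology(s, c, t)

    print(list(possible_topologies(1)))  # [Topology(sockets=1, cores=1, threads=1)]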
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.181 189385 DEBUG nova.virt.libvirt.vif [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-302335864',display_name='tempest-ServersTestManualDisk-server-302335864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-302335864',id=11,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7VfsknKdPGSeYQPwpNA8eRPA5K3rJY2apsdPtpmPbd1OcEsvJFk+7j2c/rIkrzWInP/ugRSYoulK3pMe/yztCughmVNc4bMj9IfCCNbRDUmbY13nBEkqFLtcUTz5NLHA==',key_name='tempest-keypair-534650904',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2013a3a878cf48c19ee356b2eb249216',ramdisk_id='',reservation_id='r-5865stpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1517765642',owner_user_name='tempest-ServersTestManualDisk-1517765642-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:04:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='63532fa3761a42a3a6f2dbb256ccd5d1',uuid=709ba638-65f8-4345-b8ca-b969e9719f92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.181 189385 DEBUG nova.network.os_vif_util [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Converting VIF {"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.182 189385 DEBUG nova.network.os_vif_util [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:99:84,bridge_name='br-int',has_traffic_filtering=True,id=a1692084-6415-42ca-acb4-a814c874f56a,network=Network(d737f52f-9bd2-4fa0-b695-15c08aea25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1692084-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
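The conversion above (nova_to_osvif_vif) turns the legacy dict-style VIF into an os-vif VIFOpenVSwitch object; only a handful of fields survive the translation. A rough analogue of that mapping as a plain dataclass (os-vif's real objects are versioned objects with more fields; the field names below mirror the log, the helper itself is hypothetical):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_vif_to_osvif(vif: dict) -> VIFOpenVSwitch:
        details = vif.get("details", {})
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
            vif_name=vif["devname"],
            has_traffic_filtering=details.get("port_filter", False),
            active=vif.get("active", False),
        )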
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.183 189385 DEBUG nova.objects.instance [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lazy-loading 'pci_devices' on Instance uuid 709ba638-65f8-4345-b8ca-b969e9719f92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.201 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <uuid>709ba638-65f8-4345-b8ca-b969e9719f92</uuid>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <name>instance-0000000b</name>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <nova:name>tempest-ServersTestManualDisk-server-302335864</nova:name>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:04:52</nova:creationTime>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:04:52 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:04:52 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:04:52 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:04:52 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:04:52 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:04:52 compute-0 nova_compute[189381]:         <nova:user uuid="63532fa3761a42a3a6f2dbb256ccd5d1">tempest-ServersTestManualDisk-1517765642-project-member</nova:user>
Nov 25 11:04:52 compute-0 nova_compute[189381]:         <nova:project uuid="2013a3a878cf48c19ee356b2eb249216">tempest-ServersTestManualDisk-1517765642</nova:project>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:04:52 compute-0 nova_compute[189381]:         <nova:port uuid="a1692084-6415-42ca-acb4-a814c874f56a">
Nov 25 11:04:52 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <system>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <entry name="serial">709ba638-65f8-4345-b8ca-b969e9719f92</entry>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <entry name="uuid">709ba638-65f8-4345-b8ca-b969e9719f92</entry>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </system>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <os>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   </os>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <features>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   </features>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk.config"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:25:99:84"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <target dev="tapa1692084-64"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/console.log" append="off"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <video>
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </video>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:04:52 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:04:52 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:04:52 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:04:52 compute-0 nova_compute[189381]: </domain>
Nov 25 11:04:52 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
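That closes the rendered domain definition for instance-0000000b. Once libvirt has defined and started the guest, the same XML can be read back for comparison; libvirt expands it with defaults such as PCI addresses and the resolved host-model CPU. A sketch using the libvirt Python binding, assuming access to the local qemu:///system socket:

    import libvirt  # libvirt-python

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName("instance-0000000b")
        print(dom.XMLDesc(0))  # live definition, expanded by libvirt
    finally:
        conn.close()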
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.202 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Preparing to wait for external event network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.202 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.203 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.203 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
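The three lockutils lines bracket prepare_for_instance_event(): nova registers its interest in network-vif-plugged-<port> under a per-instance "-events" lock before the VIF is plugged, so Neutron's notification cannot race the registration. The shape of that register-then-wait pattern, reduced to standard-library primitives (illustrative only, not nova's implementation):

    import threading

    _events = {}              # (instance_uuid, event_name) -> threading.Event
    _events_lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, name):
        with _events_lock:    # mirrors the "<uuid>-events" lock in the log
            return _events.setdefault((instance_uuid, name), threading.Event())

    def emit_event(instance_uuid, name):
        with _events_lock:
            ev = _events.get((instance_uuid, name))
        if ev is not None:
            ev.set()

    ev = prepare_for_instance_event(
        "709ba638-65f8-4345-b8ca-b969e9719f92",
        "network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a")
    # ... plug the VIF, start the guest ...
    if not ev.wait(timeout=300):
        raise TimeoutError("network-vif-plugged never arrived")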
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.203 189385 DEBUG nova.virt.libvirt.vif [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-302335864',display_name='tempest-ServersTestManualDisk-server-302335864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-302335864',id=11,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7VfsknKdPGSeYQPwpNA8eRPA5K3rJY2apsdPtpmPbd1OcEsvJFk+7j2c/rIkrzWInP/ugRSYoulK3pMe/yztCughmVNc4bMj9IfCCNbRDUmbY13nBEkqFLtcUTz5NLHA==',key_name='tempest-keypair-534650904',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2013a3a878cf48c19ee356b2eb249216',ramdisk_id='',reservation_id='r-5865stpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1517765642',owner_user_name='tempest-ServersTestManualDisk-1517765642-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:04:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='63532fa3761a42a3a6f2dbb256ccd5d1',uuid=709ba638-65f8-4345-b8ca-b969e9719f92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.204 189385 DEBUG nova.network.os_vif_util [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Converting VIF {"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.204 189385 DEBUG nova.network.os_vif_util [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:99:84,bridge_name='br-int',has_traffic_filtering=True,id=a1692084-6415-42ca-acb4-a814c874f56a,network=Network(d737f52f-9bd2-4fa0-b695-15c08aea25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1692084-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.205 189385 DEBUG os_vif [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:99:84,bridge_name='br-int',has_traffic_filtering=True,id=a1692084-6415-42ca-acb4-a814c874f56a,network=Network(d737f52f-9bd2-4fa0-b695-15c08aea25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1692084-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.205 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.206 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.206 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.209 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.209 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa1692084-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.210 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa1692084-64, col_values=(('external_ids', {'iface-id': 'a1692084-6415-42ca-acb4-a814c874f56a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:99:84', 'vm-uuid': '709ba638-65f8-4345-b8ca-b969e9719f92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
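The two ovsdbapp commands above attach the tap device to br-int and stamp its Interface row with external_ids, most importantly iface-id, which is what ovn-controller matches against Port_Binding rows in the Southbound database. The equivalent transaction expressed through ovs-vsctl from Python (a sketch of the same effect, not what os-vif actually executes):

    import subprocess

    port = "tapa1692084-64"
    iface_id = "a1692084-6415-42ca-acb4-a814c874f56a"
    mac = "fa:16:3e:25:99:84"
    vm_uuid = "709ba638-65f8-4345-b8ca-b969e9719f92"

    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True,
    )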
Nov 25 11:04:52 compute-0 NetworkManager[56317]: <info>  [1764068692.2127] manager: (tapa1692084-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.215 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.221 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.221 189385 INFO os_vif [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:99:84,bridge_name='br-int',has_traffic_filtering=True,id=a1692084-6415-42ca-acb4-a814c874f56a,network=Network(d737f52f-9bd2-4fa0-b695-15c08aea25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1692084-64')
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.332 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.333 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.334 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] No VIF found with MAC fa:16:3e:25:99:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:04:52 compute-0 nova_compute[189381]: 2025-11-25 11:04:52.335 189385 INFO nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Using config drive
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.152 189385 INFO nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Creating config drive at /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk.config
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.161 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeixgtcil execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.291 189385 DEBUG oslo_concurrency.processutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeixgtcil" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
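The config drive is a plain ISO 9660 image built with the mkisofs invocation captured above; the -V config-2 volume label is what cloud-init probes for. Rebuilding one by hand is a single subprocess call over a staging directory (flags copied from the logged command; the staging path argument stands in for the temporary directory nova creates):

    import subprocess

    def build_config_drive(output_path: str, staging_dir: str) -> None:
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", output_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
             "-quiet", "-J", "-r", "-V", "config-2", staging_dir],
            check=True,
        )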
Nov 25 11:04:53 compute-0 NetworkManager[56317]: <info>  [1764068693.3629] manager: (tapa1692084-64): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 25 11:04:53 compute-0 kernel: tapa1692084-64: entered promiscuous mode
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.367 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:53 compute-0 ovn_controller[97779]: 2025-11-25T11:04:53Z|00135|binding|INFO|Claiming lport a1692084-6415-42ca-acb4-a814c874f56a for this chassis.
Nov 25 11:04:53 compute-0 ovn_controller[97779]: 2025-11-25T11:04:53Z|00136|binding|INFO|a1692084-6415-42ca-acb4-a814c874f56a: Claiming fa:16:3e:25:99:84 10.100.0.14
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.390 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:99:84 10.100.0.14'], port_security=['fa:16:3e:25:99:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '709ba638-65f8-4345-b8ca-b969e9719f92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d737f52f-9bd2-4fa0-b695-15c08aea25ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2013a3a878cf48c19ee356b2eb249216', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f01b6e2c-f9cc-4aa3-addf-dc4f86a1ec40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb5d16f2-4a4d-461a-be64-340216e2f14c, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=a1692084-6415-42ca-acb4-a814c874f56a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.391 106634 INFO neutron.agent.ovn.metadata.agent [-] Port a1692084-6415-42ca-acb4-a814c874f56a in datapath d737f52f-9bd2-4fa0-b695-15c08aea25ba bound to our chassis
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.393 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d737f52f-9bd2-4fa0-b695-15c08aea25ba
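ovn-controller has claimed the logical port for this chassis, and the metadata agent reacts to the Port_Binding update by provisioning a per-network namespace. A binding can be confirmed from the compute node with the generic ovsdb commands of ovn-sbctl (sketch; assumes local access to the Southbound database):

    import subprocess

    lport = "a1692084-6415-42ca-acb4-a814c874f56a"
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # chassis, mac, tunnel_key, up, ... for the bound port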
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.401 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2838dbd6-41aa-4486-9445-42f9124ea633]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.402 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd737f52f-91 in ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.404 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd737f52f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.404 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[77eee4e6-5c42-4dc7-9bcf-319892f2e9fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.405 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[10845d2d-3048-49fe-b098-cc09ff480fed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.418 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[893f6d35-951a-44d4-bfec-48dd4d93ade6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
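The privsep round-trips here correspond to building the ovnmeta-<network> namespace and a veth pair whose inner end (tapd737f52f-91) lives inside it; that leg on the tenant network is where the metadata proxy will listen. Neutron drives this through pyroute2 behind its privsep daemon; a standalone sketch of the same steps, assuming root privileges (not the agent's actual code path):

    from pyroute2 import IPRoute, netns

    ns_name = "ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba"
    netns.create(ns_name)  # like `ip netns add`

    ipr = IPRoute()
    ipr.link("add", ifname="tapd737f52f-90", kind="veth", peer="tapd737f52f-91")
    inner = ipr.link_lookup(ifname="tapd737f52f-91")[0]
    ipr.link("set", index=inner, net_ns_fd=ns_name)  # move inner end into the namespace
    outer = ipr.link_lookup(ifname="tapd737f52f-90")[0]
    ipr.link("set", index=outer, state="up")
    ipr.close()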
Nov 25 11:04:53 compute-0 systemd-machined[155706]: New machine qemu-12-instance-0000000b.
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.429 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:53 compute-0 ovn_controller[97779]: 2025-11-25T11:04:53Z|00137|binding|INFO|Setting lport a1692084-6415-42ca-acb4-a814c874f56a ovn-installed in OVS
Nov 25 11:04:53 compute-0 ovn_controller[97779]: 2025-11-25T11:04:53Z|00138|binding|INFO|Setting lport a1692084-6415-42ca-acb4-a814c874f56a up in Southbound
Nov 25 11:04:53 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.435 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.443 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc0e96d-f4db-4402-b876-2d5db1ab33ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 systemd-udevd[255079]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:04:53 compute-0 podman[255029]: 2025-11-25 11:04:53.451699656 +0000 UTC m=+0.096656058 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 11:04:53 compute-0 NetworkManager[56317]: <info>  [1764068693.4586] device (tapa1692084-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:04:53 compute-0 NetworkManager[56317]: <info>  [1764068693.4600] device (tapa1692084-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:04:53 compute-0 podman[255030]: 2025-11-25 11:04:53.461022305 +0000 UTC m=+0.104085753 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.479 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[f972b361-5ea6-4ec2-80d1-5a5f3979dbc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.487 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[da3032c2-021c-49aa-b334-d6747b3471f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 systemd-udevd[255083]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:04:53 compute-0 NetworkManager[56317]: <info>  [1764068693.4882] manager: (tapd737f52f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.512 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.520 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[13683176-3985-494c-a988-61f67746a4a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.523 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a9a141-c873-4d91-8985-3c4e6419bd12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 NetworkManager[56317]: <info>  [1764068693.5453] device (tapd737f52f-90): carrier: link connected
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.557 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[e919b36f-d5e8-4b39-b5fd-9ad89a51c12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.578 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[6ee0f2ad-4dfd-4da7-9bc6-1843e281cc3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd737f52f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:73:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560112, 'reachable_time': 24971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255110, 'error': None, 'target': 'ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.598 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[02f2f3ad-5487-4aa7-a6e4-449cae69d464]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:735c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560112, 'tstamp': 560112}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255111, 'error': None, 'target': 'ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.618 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[f8612843-018d-4899-b770-9c777b4dec50]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd737f52f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:73:5c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560112, 'reachable_time': 24971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255112, 'error': None, 'target': 'ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
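The two RTM_NEWLINK payloads above are raw netlink messages in pyroute2's dict format for tapd737f52f-91 inside the new namespace; the agent is confirming the link is up and reading its MAC before wiring up the proxy. The same attributes can be read far more compactly (a sketch, assuming the namespace exists and the caller has privileges to enter it):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba") as ns:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),    # e.g. tapd737f52f-91
                  link.get_attr("IFLA_ADDRESS"),   # fa:16:3e:fa:73:5c
                  link.get_attr("IFLA_OPERSTATE"))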
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.662 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[5c04db15-e182-496e-b29c-138b369b993f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.723 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[07dcecbc-cdd2-4b65-af76-653d7d8e2177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.724 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd737f52f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.724 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.725 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd737f52f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.726 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:53 compute-0 NetworkManager[56317]: <info>  [1764068693.7273] manager: (tapd737f52f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 25 11:04:53 compute-0 kernel: tapd737f52f-90: entered promiscuous mode
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.729 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.731 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd737f52f-90, col_values=(('external_ids', {'iface-id': 'b8fb9b1a-a4e5-4595-b2cf-3654dda153c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.733 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:53 compute-0 ovn_controller[97779]: 2025-11-25T11:04:53Z|00139|binding|INFO|Releasing lport b8fb9b1a-a4e5-4595-b2cf-3654dda153c0 from this chassis (sb_readonly=0)
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.735 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d737f52f-9bd2-4fa0-b695-15c08aea25ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d737f52f-9bd2-4fa0-b695-15c08aea25ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.736 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[adf7ce62-d144-46c2-bbd3-faeaa225b7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.737 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-d737f52f-9bd2-4fa0-b695-15c08aea25ba
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/d737f52f-9bd2-4fa0-b695-15c08aea25ba.pid.haproxy
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID d737f52f-9bd2-4fa0-b695-15c08aea25ba
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:04:53 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:04:53.738 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba', 'env', 'PROCESS_TAG=haproxy-d737f52f-9bd2-4fa0-b695-15c08aea25ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d737f52f-9bd2-4fa0-b695-15c08aea25ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.753 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.821 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068693.820726, 709ba638-65f8-4345-b8ca-b969e9719f92 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.821 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] VM Started (Lifecycle Event)
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.848 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.854 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068693.820862, 709ba638-65f8-4345-b8ca-b969e9719f92 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.856 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] VM Paused (Lifecycle Event)
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.875 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.883 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:04:53 compute-0 nova_compute[189381]: 2025-11-25 11:04:53.906 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:04:54 compute-0 podman[255151]: 2025-11-25 11:04:54.106796868 +0000 UTC m=+0.027063345 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:04:54 compute-0 nova_compute[189381]: 2025-11-25 11:04:54.285 189385 DEBUG nova.network.neutron [req-09a5d394-e3ca-4126-a943-1b2ece470c91 req-4d9ca081-4707-4756-8fae-29481be5c18e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Updated VIF entry in instance network info cache for port a1692084-6415-42ca-acb4-a814c874f56a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:04:54 compute-0 nova_compute[189381]: 2025-11-25 11:04:54.286 189385 DEBUG nova.network.neutron [req-09a5d394-e3ca-4126-a943-1b2ece470c91 req-4d9ca081-4707-4756-8fae-29481be5c18e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Updating instance_info_cache with network_info: [{"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:04:54 compute-0 nova_compute[189381]: 2025-11-25 11:04:54.298 189385 DEBUG oslo_concurrency.lockutils [req-09a5d394-e3ca-4126-a943-1b2ece470c91 req-4d9ca081-4707-4756-8fae-29481be5c18e d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:04:54 compute-0 podman[255151]: 2025-11-25 11:04:54.401114296 +0000 UTC m=+0.321380743 container create f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:04:54 compute-0 systemd[1]: Started libpod-conmon-f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46.scope.
Nov 25 11:04:54 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57eb52b4f22453394ad4f550051f1ae94f6eaf03c4b7abcb5edb05ed950dad85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:04:54 compute-0 podman[255151]: 2025-11-25 11:04:54.730855821 +0000 UTC m=+0.651122278 container init f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:04:54 compute-0 podman[255151]: 2025-11-25 11:04:54.737639017 +0000 UTC m=+0.657905464 container start f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 25 11:04:54 compute-0 neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba[255165]: [NOTICE]   (255169) : New worker (255171) forked
Nov 25 11:04:54 compute-0 neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba[255165]: [NOTICE]   (255169) : Loading success.
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.501 189385 DEBUG nova.compute.manager [req-ddbba5c7-c3f4-4de7-83c7-8b9bc465bf46 req-4caa7550-2a8e-4c41-88f3-4cfad29d6d5b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received event network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.503 189385 DEBUG oslo_concurrency.lockutils [req-ddbba5c7-c3f4-4de7-83c7-8b9bc465bf46 req-4caa7550-2a8e-4c41-88f3-4cfad29d6d5b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.503 189385 DEBUG oslo_concurrency.lockutils [req-ddbba5c7-c3f4-4de7-83c7-8b9bc465bf46 req-4caa7550-2a8e-4c41-88f3-4cfad29d6d5b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.503 189385 DEBUG oslo_concurrency.lockutils [req-ddbba5c7-c3f4-4de7-83c7-8b9bc465bf46 req-4caa7550-2a8e-4c41-88f3-4cfad29d6d5b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.504 189385 DEBUG nova.compute.manager [req-ddbba5c7-c3f4-4de7-83c7-8b9bc465bf46 req-4caa7550-2a8e-4c41-88f3-4cfad29d6d5b d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Processing event network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.504 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.509 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068695.5089579, 709ba638-65f8-4345-b8ca-b969e9719f92 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.510 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] VM Resumed (Lifecycle Event)
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.513 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.519 189385 INFO nova.virt.libvirt.driver [-] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Instance spawned successfully.
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.519 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.531 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.544 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.549 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.549 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.550 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.551 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.551 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.552 189385 DEBUG nova.virt.libvirt.driver [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.582 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.615 189385 DEBUG nova.compute.manager [req-be4621c3-1fec-49a7-9a4c-65a8fe622613 req-75bd6b1c-b431-4d51-9426-2644d6519cb2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received event network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.616 189385 DEBUG oslo_concurrency.lockutils [req-be4621c3-1fec-49a7-9a4c-65a8fe622613 req-75bd6b1c-b431-4d51-9426-2644d6519cb2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.616 189385 DEBUG oslo_concurrency.lockutils [req-be4621c3-1fec-49a7-9a4c-65a8fe622613 req-75bd6b1c-b431-4d51-9426-2644d6519cb2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.617 189385 DEBUG oslo_concurrency.lockutils [req-be4621c3-1fec-49a7-9a4c-65a8fe622613 req-75bd6b1c-b431-4d51-9426-2644d6519cb2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.617 189385 DEBUG nova.compute.manager [req-be4621c3-1fec-49a7-9a4c-65a8fe622613 req-75bd6b1c-b431-4d51-9426-2644d6519cb2 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Processing event network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.618 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Instance event wait completed in 13 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.634 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068695.6237323, 18a30ced-09e6-4c6a-9ea3-4c59f437a71a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.635 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] VM Resumed (Lifecycle Event)
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.645 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.666 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.669 189385 INFO nova.virt.libvirt.driver [-] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Instance spawned successfully.
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.669 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.677 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.765 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.781 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.782 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.782 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.783 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.783 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.784 189385 DEBUG nova.virt.libvirt.driver [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.932 189385 INFO nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Took 22.93 seconds to spawn the instance on the hypervisor.
Nov 25 11:04:55 compute-0 nova_compute[189381]: 2025-11-25 11:04:55.933 189385 DEBUG nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:04:56 compute-0 nova_compute[189381]: 2025-11-25 11:04:56.024 189385 INFO nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Took 33.56 seconds to spawn the instance on the hypervisor.
Nov 25 11:04:56 compute-0 nova_compute[189381]: 2025-11-25 11:04:56.025 189385 DEBUG nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:04:56 compute-0 nova_compute[189381]: 2025-11-25 11:04:56.187 189385 INFO nova.compute.manager [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Took 24.67 seconds to build instance.
Nov 25 11:04:56 compute-0 nova_compute[189381]: 2025-11-25 11:04:56.202 189385 INFO nova.compute.manager [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Took 36.01 seconds to build instance.
Nov 25 11:04:56 compute-0 nova_compute[189381]: 2025-11-25 11:04:56.261 189385 DEBUG oslo_concurrency.lockutils [None req-508b32ee-a5eb-4e00-bec2-f8e890bf9c81 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:56 compute-0 nova_compute[189381]: 2025-11-25 11:04:56.323 189385 DEBUG oslo_concurrency.lockutils [None req-1f8cb552-6680-4a10-b2fa-0b963145af21 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 36.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:57 compute-0 nova_compute[189381]: 2025-11-25 11:04:57.028 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:04:57 compute-0 nova_compute[189381]: 2025-11-25 11:04:57.213 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:57 compute-0 podman[255182]: 2025-11-25 11:04:57.977912567 +0000 UTC m=+0.079705288 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:04:58 compute-0 podman[255181]: 2025-11-25 11:04:58.041086366 +0000 UTC m=+0.145031859 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.516 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.535 189385 DEBUG nova.compute.manager [req-f390ad51-6323-4f60-9493-193cbc37a568 req-0883d523-e3e9-4b20-a40d-c38b5321c5f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received event network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.536 189385 DEBUG oslo_concurrency.lockutils [req-f390ad51-6323-4f60-9493-193cbc37a568 req-0883d523-e3e9-4b20-a40d-c38b5321c5f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.536 189385 DEBUG oslo_concurrency.lockutils [req-f390ad51-6323-4f60-9493-193cbc37a568 req-0883d523-e3e9-4b20-a40d-c38b5321c5f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.537 189385 DEBUG oslo_concurrency.lockutils [req-f390ad51-6323-4f60-9493-193cbc37a568 req-0883d523-e3e9-4b20-a40d-c38b5321c5f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.537 189385 DEBUG nova.compute.manager [req-f390ad51-6323-4f60-9493-193cbc37a568 req-0883d523-e3e9-4b20-a40d-c38b5321c5f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] No waiting events found dispatching network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.537 189385 WARNING nova.compute.manager [req-f390ad51-6323-4f60-9493-193cbc37a568 req-0883d523-e3e9-4b20-a40d-c38b5321c5f8 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received unexpected event network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a for instance with vm_state active and task_state None.
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.619 189385 DEBUG nova.compute.manager [req-7de4932a-a56f-4269-9364-1b446b2e5def req-c909b822-dc0f-4502-9290-238374445850 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received event network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.620 189385 DEBUG oslo_concurrency.lockutils [req-7de4932a-a56f-4269-9364-1b446b2e5def req-c909b822-dc0f-4502-9290-238374445850 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.621 189385 DEBUG oslo_concurrency.lockutils [req-7de4932a-a56f-4269-9364-1b446b2e5def req-c909b822-dc0f-4502-9290-238374445850 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.621 189385 DEBUG oslo_concurrency.lockutils [req-7de4932a-a56f-4269-9364-1b446b2e5def req-c909b822-dc0f-4502-9290-238374445850 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.621 189385 DEBUG nova.compute.manager [req-7de4932a-a56f-4269-9364-1b446b2e5def req-c909b822-dc0f-4502-9290-238374445850 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] No waiting events found dispatching network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:04:58 compute-0 nova_compute[189381]: 2025-11-25 11:04:58.622 189385 WARNING nova.compute.manager [req-7de4932a-a56f-4269-9364-1b446b2e5def req-c909b822-dc0f-4502-9290-238374445850 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received unexpected event network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 for instance with vm_state active and task_state None.
Nov 25 11:04:59 compute-0 NetworkManager[56317]: <info>  [1764068699.6461] manager: (patch-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 25 11:04:59 compute-0 nova_compute[189381]: 2025-11-25 11:04:59.646 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:59 compute-0 NetworkManager[56317]: <info>  [1764068699.6478] manager: (patch-br-int-to-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 25 11:04:59 compute-0 podman[203557]: time="2025-11-25T11:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:04:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Nov 25 11:04:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5259 "" "Go-http-client/1.1"
Nov 25 11:04:59 compute-0 nova_compute[189381]: 2025-11-25 11:04:59.861 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:04:59 compute-0 ovn_controller[97779]: 2025-11-25T11:04:59Z|00140|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:04:59 compute-0 ovn_controller[97779]: 2025-11-25T11:04:59Z|00141|binding|INFO|Releasing lport b8fb9b1a-a4e5-4595-b2cf-3654dda153c0 from this chassis (sb_readonly=0)
Nov 25 11:04:59 compute-0 nova_compute[189381]: 2025-11-25 11:04:59.894 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:01 compute-0 openstack_network_exporter[205722]: ERROR   11:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:05:01 compute-0 openstack_network_exporter[205722]: ERROR   11:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:05:01 compute-0 openstack_network_exporter[205722]: ERROR   11:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:05:01 compute-0 openstack_network_exporter[205722]: ERROR   11:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:05:01 compute-0 openstack_network_exporter[205722]: ERROR   11:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:05:01 compute-0 nova_compute[189381]: 2025-11-25 11:05:01.666 189385 DEBUG nova.compute.manager [req-11b29044-90b1-4d4a-8b2e-5efa2ae005a3 req-bea970c9-afb0-4525-8dda-6632429ba8bf d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received event network-changed-a1692084-6415-42ca-acb4-a814c874f56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:01 compute-0 nova_compute[189381]: 2025-11-25 11:05:01.666 189385 DEBUG nova.compute.manager [req-11b29044-90b1-4d4a-8b2e-5efa2ae005a3 req-bea970c9-afb0-4525-8dda-6632429ba8bf d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Refreshing instance network info cache due to event network-changed-a1692084-6415-42ca-acb4-a814c874f56a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:05:01 compute-0 nova_compute[189381]: 2025-11-25 11:05:01.667 189385 DEBUG oslo_concurrency.lockutils [req-11b29044-90b1-4d4a-8b2e-5efa2ae005a3 req-bea970c9-afb0-4525-8dda-6632429ba8bf d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:05:01 compute-0 nova_compute[189381]: 2025-11-25 11:05:01.667 189385 DEBUG oslo_concurrency.lockutils [req-11b29044-90b1-4d4a-8b2e-5efa2ae005a3 req-bea970c9-afb0-4525-8dda-6632429ba8bf d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:05:01 compute-0 nova_compute[189381]: 2025-11-25 11:05:01.667 189385 DEBUG nova.network.neutron [req-11b29044-90b1-4d4a-8b2e-5efa2ae005a3 req-bea970c9-afb0-4525-8dda-6632429ba8bf d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Refreshing network info cache for port a1692084-6415-42ca-acb4-a814c874f56a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:05:01 compute-0 podman[255223]: 2025-11-25 11:05:01.951708179 +0000 UTC m=+0.067189836 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.216 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.484 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "709ba638-65f8-4345-b8ca-b969e9719f92" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.485 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.485 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.485 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.486 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.487 189385 INFO nova.compute.manager [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Terminating instance
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.488 189385 DEBUG nova.compute.manager [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:05:02 compute-0 kernel: tapa1692084-64 (unregistering): left promiscuous mode
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.535 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 ovn_controller[97779]: 2025-11-25T11:05:02Z|00142|binding|INFO|Releasing lport a1692084-6415-42ca-acb4-a814c874f56a from this chassis (sb_readonly=0)
Nov 25 11:05:02 compute-0 ovn_controller[97779]: 2025-11-25T11:05:02Z|00143|binding|INFO|Setting lport a1692084-6415-42ca-acb4-a814c874f56a down in Southbound
Nov 25 11:05:02 compute-0 NetworkManager[56317]: <info>  [1764068702.5406] device (tapa1692084-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:05:02 compute-0 ovn_controller[97779]: 2025-11-25T11:05:02Z|00144|binding|INFO|Removing iface tapa1692084-64 ovn-installed in OVS
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.556 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.561 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 25 11:05:02 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 7.536s CPU time.
Nov 25 11:05:02 compute-0 systemd-machined[155706]: Machine qemu-12-instance-0000000b terminated.
Nov 25 11:05:02 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:02.616 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:99:84 10.100.0.14'], port_security=['fa:16:3e:25:99:84 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '709ba638-65f8-4345-b8ca-b969e9719f92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d737f52f-9bd2-4fa0-b695-15c08aea25ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2013a3a878cf48c19ee356b2eb249216', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f01b6e2c-f9cc-4aa3-addf-dc4f86a1ec40', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bb5d16f2-4a4d-461a-be64-340216e2f14c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=a1692084-6415-42ca-acb4-a814c874f56a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:05:02 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:02.617 106634 INFO neutron.agent.ovn.metadata.agent [-] Port a1692084-6415-42ca-acb4-a814c874f56a in datapath d737f52f-9bd2-4fa0-b695-15c08aea25ba unbound from our chassis
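The "Matched UPDATE" line above is ovsdbapp's row-event machinery: the metadata agent registers a RowEvent subclass against the Southbound Port_Binding table, and each matching update is dispatched to it (the "matches" reference points at ovsdbapp/backend/ovs_idl/event.py). A minimal sketch of such an event class; the run() body is illustrative, not the agent's actual code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """React to Port_Binding rows changing, as in the log line above."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None --
            # the same tuple shown in the matched-event repr in the log.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns, e.g.
            # old.up == [True] while row.up == [False] means the port just
            # went down, which is what triggers the unbind handling below.
            print('port %s changed' % row.logical_port)

    evt = PortBindingUpdatedEvent()
    # In the agent this instance is registered with the handler watching the
    # OVN Southbound IDL; run() then fires for every matching row update.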
Nov 25 11:05:02 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:02.619 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d737f52f-9bd2-4fa0-b695-15c08aea25ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:05:02 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:02.620 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e365ff68-692f-4344-9b36-c53f85add3e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:02 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:02.621 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba namespace which is not needed anymore
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.717 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.723 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.762 189385 INFO nova.virt.libvirt.driver [-] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Instance destroyed successfully.
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.763 189385 DEBUG nova.objects.instance [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lazy-loading 'resources' on Instance uuid 709ba638-65f8-4345-b8ca-b969e9719f92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.779 189385 DEBUG nova.virt.libvirt.vif [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:04:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-302335864',display_name='tempest-ServersTestManualDisk-server-302335864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-302335864',id=11,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP7VfsknKdPGSeYQPwpNA8eRPA5K3rJY2apsdPtpmPbd1OcEsvJFk+7j2c/rIkrzWInP/ugRSYoulK3pMe/yztCughmVNc4bMj9IfCCNbRDUmbY13nBEkqFLtcUTz5NLHA==',key_name='tempest-keypair-534650904',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:04:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2013a3a878cf48c19ee356b2eb249216',ramdisk_id='',reservation_id='r-5865stpx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1517765642',owner_user_name='tempest-ServersTestManualDisk-1517765642-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='63532fa3761a42a3a6f2dbb256ccd5d1',uuid=709ba638-65f8-4345-b8ca-b969e9719f92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.780 189385 DEBUG nova.network.os_vif_util [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Converting VIF {"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.781 189385 DEBUG nova.network.os_vif_util [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:99:84,bridge_name='br-int',has_traffic_filtering=True,id=a1692084-6415-42ca-acb4-a814c874f56a,network=Network(d737f52f-9bd2-4fa0-b695-15c08aea25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1692084-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.782 189385 DEBUG os_vif [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:99:84,bridge_name='br-int',has_traffic_filtering=True,id=a1692084-6415-42ca-acb4-a814c874f56a,network=Network(d737f52f-9bd2-4fa0-b695-15c08aea25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1692084-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.784 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.784 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa1692084-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.786 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.788 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.791 189385 INFO os_vif [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:99:84,bridge_name='br-int',has_traffic_filtering=True,id=a1692084-6415-42ca-acb4-a814c874f56a,network=Network(d737f52f-9bd2-4fa0-b695-15c08aea25ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa1692084-64')
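The DelPortCommand logged a few lines up can be reproduced against the local ovsdb-server with ovsdbapp directly. A minimal sketch; the unix-socket endpoint is an assumption (the standard Open vSwitch default), not something shown in this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes the command a no-op when the tap is already gone,
    # which is why the unplug path above is safe to retry.
    api.del_port('tapa1692084-64', bridge='br-int',
                 if_exists=True).execute(check_error=True)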
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.792 189385 INFO nova.virt.libvirt.driver [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Deleting instance files /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92_del
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.793 189385 INFO nova.virt.libvirt.driver [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Deletion of /var/lib/nova/instances/709ba638-65f8-4345-b8ca-b969e9719f92_del complete
Nov 25 11:05:02 compute-0 neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba[255165]: [NOTICE]   (255169) : haproxy version is 2.8.14-c23fe91
Nov 25 11:05:02 compute-0 neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba[255165]: [NOTICE]   (255169) : path to executable is /usr/sbin/haproxy
Nov 25 11:05:02 compute-0 neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba[255165]: [WARNING]  (255169) : Exiting Master process...
Nov 25 11:05:02 compute-0 neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba[255165]: [ALERT]    (255169) : Current worker (255171) exited with code 143 (Terminated)
Nov 25 11:05:02 compute-0 neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba[255165]: [WARNING]  (255169) : All workers exited. Exiting... (0)
Nov 25 11:05:02 compute-0 systemd[1]: libpod-f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46.scope: Deactivated successfully.
Nov 25 11:05:02 compute-0 podman[255272]: 2025-11-25 11:05:02.86106019 +0000 UTC m=+0.130235301 container died f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.865 189385 INFO nova.compute.manager [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Took 0.38 seconds to destroy the instance on the hypervisor.
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.866 189385 DEBUG oslo.service.loopingcall [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.867 189385 DEBUG nova.compute.manager [-] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:05:02 compute-0 nova_compute[189381]: 2025-11-25 11:05:02.867 189385 DEBUG nova.network.neutron [-] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
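deallocate_for_instance() ends with the Neutron port being removed; since Nova created this port at boot (note preserve_on_delete=False in the VIF dump above), it is deleted outright rather than merely unbound. A rough openstacksdk equivalent, assuming credentials live in a clouds.yaml entry named 'openstack' (that cloud name is an assumption; the port UUID is the one from this log):

    import openstack

    conn = openstack.connect(cloud='openstack')
    # A pre-existing user port would instead have its binding cleared
    # (device_id/device_owner reset); a Nova-created one is deleted.
    conn.network.delete_port('a1692084-6415-42ca-acb4-a814c874f56a',
                             ignore_missing=True)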
Nov 25 11:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46-userdata-shm.mount: Deactivated successfully.
Nov 25 11:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-57eb52b4f22453394ad4f550051f1ae94f6eaf03c4b7abcb5edb05ed950dad85-merged.mount: Deactivated successfully.
Nov 25 11:05:03 compute-0 podman[255272]: 2025-11-25 11:05:03.209699622 +0000 UTC m=+0.478874733 container cleanup f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 11:05:03 compute-0 systemd[1]: libpod-conmon-f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46.scope: Deactivated successfully.
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.341 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.342 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
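The two DEBUG lines above note that this polling task has more pollsters than worker threads ([1] here), so pollsters run serially and the cycle time is the sum of the individual runtimes. A toy illustration of that queuing effect (the 0.1s sleep and the meter names are placeholders):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)            # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:    # one thread, as logged
        list(pool.map(pollster, ['network.outgoing.bytes', 'memory.usage',
                                 'network.incoming.bytes']))
    print('cycle took %.2fs' % (time.monotonic() - start))  # ~0.3s, not ~0.1s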
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.348 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.350 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/18a30ced-09e6-4c6a-9ea3-4c59f437a71a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
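The curl-style REQ line above is keystoneauth's HTTP debug logging for a novaclient servers.get() call during instance discovery. The same request issued programmatically, as a sketch: the auth URL and credentials are placeholders, only the Nova endpoint behavior and the server UUID come from the log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    sess = session.Session(auth=v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder
        username='ceilometer', password='***', project_name='service',
        user_domain_name='Default', project_domain_name='Default'))
    nova = client.Client('2.1', session=sess)

    # Equivalent of the GET /v2.1/servers/<uuid> shown above.
    server = nova.servers.get('18a30ced-09e6-4c6a-9ea3-4c59f437a71a')
    print(server.status, server.metadata)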
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:05:03 compute-0 podman[255316]: 2025-11-25 11:05:03.469639716 +0000 UTC m=+0.231865103 container remove f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.478 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[c22e0a3e-275f-4461-82ec-4693fc1124e8]: (4, ('Tue Nov 25 11:05:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba (f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46)\nf6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46\nTue Nov 25 11:05:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba (f6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46)\nf6fdc05da558c31a1fae2b3c9175a01e041e79884a6b137e3b6d541a5da5db46\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.480 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0cbfe0e1-677f-463e-8456-3587fae436da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.481 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd737f52f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.483 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:03 compute-0 kernel: tapd737f52f-90: left promiscuous mode
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.486 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.489 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[88a99fed-66e6-4eb6-8716-efdbbcee9c50]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.512 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.513 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[7b00c373-085c-4217-b704-76917e4143a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.515 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[37b14cb0-435a-449b-b16d-83375d1d1f9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.517 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.530 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[06b974b3-ffdc-418c-8768-ce729942de9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560105, 'reachable_time': 29826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255331, 'error': None, 'target': 'ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.533 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:05:03 compute-0 systemd[1]: run-netns-ovnmeta\x2dd737f52f\x2d9bd2\x2d4fa0\x2db695\x2d15c08aea25ba.mount: Deactivated successfully.
Nov 25 11:05:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:03.533 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[68aeda22-09ca-4bc9-a57d-0b47a64961df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
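remove_netns above runs inside neutron's privsep daemon (hence the "privsep: reply" lines), so the unprivileged agent never needs CAP_SYS_ADMIN itself. Underneath it is essentially a pyroute2 namespace removal; a minimal sketch, with the tolerate-missing behavior assumed to mirror the agent's:

    import errno
    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-d737f52f-9bd2-4fa0-b695-15c08aea25ba')
    except OSError as e:
        if e.errno != errno.ENOENT:   # already gone is fine; anything else isn't
            raise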
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.858 189385 DEBUG nova.compute.manager [req-b4f2b58b-e8c9-4ade-8c37-769ecb626146 req-6557c16b-26e8-4fb4-9502-e779160a6f83 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received event network-vif-unplugged-a1692084-6415-42ca-acb4-a814c874f56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.859 189385 DEBUG oslo_concurrency.lockutils [req-b4f2b58b-e8c9-4ade-8c37-769ecb626146 req-6557c16b-26e8-4fb4-9502-e779160a6f83 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.860 189385 DEBUG oslo_concurrency.lockutils [req-b4f2b58b-e8c9-4ade-8c37-769ecb626146 req-6557c16b-26e8-4fb4-9502-e779160a6f83 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.860 189385 DEBUG oslo_concurrency.lockutils [req-b4f2b58b-e8c9-4ade-8c37-769ecb626146 req-6557c16b-26e8-4fb4-9502-e779160a6f83 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.860 189385 DEBUG nova.compute.manager [req-b4f2b58b-e8c9-4ade-8c37-769ecb626146 req-6557c16b-26e8-4fb4-9502-e779160a6f83 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] No waiting events found dispatching network-vif-unplugged-a1692084-6415-42ca-acb4-a814c874f56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:05:03 compute-0 nova_compute[189381]: 2025-11-25 11:05:03.860 189385 DEBUG nova.compute.manager [req-b4f2b58b-e8c9-4ade-8c37-769ecb626146 req-6557c16b-26e8-4fb4-9502-e779160a6f83 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received event network-vif-unplugged-a1692084-6415-42ca-acb4-a814c874f56a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
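The network-vif-unplugged event processed above arrives via Nova's os-server-external-events API, which Neutron calls when the port goes down. A sketch of that call with python-novaclient; the auth URL and credentials are placeholders as in the earlier sketch, while the server and port UUIDs come from this log:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    sess = session.Session(auth=v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder
        username='neutron', password='***', project_name='service',
        user_domain_name='Default', project_domain_name='Default'))
    nova = client.Client('2.1', session=sess)

    nova.server_external_events.create([{
        'server_uuid': '709ba638-65f8-4345-b8ca-b969e9719f92',
        'name': 'network-vif-unplugged',
        'tag': 'a1692084-6415-42ca-acb4-a814c874f56a',   # the port UUID
    }])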
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.921 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Tue, 25 Nov 2025 11:05:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-705072e9-befb-4392-b502-6ca70c392dcc x-openstack-request-id: req-705072e9-befb-4392-b502-6ca70c392dcc _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.922 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "18a30ced-09e6-4c6a-9ea3-4c59f437a71a", "name": "te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4", "status": "ACTIVE", "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "user_id": "95acdf386c1e42c8a6da1f7b9603054f", "metadata": {"metering.server_group": "f33016ec-000f-44cf-b7cc-2122723ba143"}, "hostId": "70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162", "image": {"id": "62ab6b08-ec10-4838-aa81-24150af36537", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/62ab6b08-ec10-4838-aa81-24150af36537"}]}, "flavor": {"id": "b7c0626e-febc-4083-b621-6f5ee0740a18", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b7c0626e-febc-4083-b621-6f5ee0740a18"}]}, "created": "2025-11-25T11:04:14Z", "updated": "2025-11-25T11:04:56Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:fd:bc:05"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/18a30ced-09e6-4c6a-9ea3-4c59f437a71a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/18a30ced-09e6-4c6a-9ea3-4c59f437a71a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-25T11:04:56.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.923 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/18a30ced-09e6-4c6a-9ea3-4c59f437a71a used request id req-705072e9-befb-4392-b502-6ca70c392dcc request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.924 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'name': 'te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.925 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.925 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.926 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.926 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:05:04.926387) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.930 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 18a30ced-09e6-4c6a-9ea3-4c59f437a71a / tap6ed45132-26 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.930 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.931 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
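The byte counters sampled above come from libvirt: the inspector reads per-VIF statistics off the running domain. A minimal sketch with libvirt-python, using the domain name and tap device shown in this log:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000a')
    # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                         tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = dom.interfaceStats('tap6ed45132-26')
    print('network.incoming.bytes=%d network.outgoing.bytes=%d'
          % (stats[0], stats[4]))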
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.932 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.932 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.932 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.933 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.933 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.934 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:05:04.933713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.934 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.935 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.936 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:05:04.936680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.968 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.969 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a: ceilometer.compute.pollsters.NoVolumeException
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.969 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.970 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.970 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.971 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644170>] is not configured in a source for polling that requires coordination. The current hashrings are the following: [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.971 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.971 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-25T11:05:04.971397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.971 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.972 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4>]
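PollsterPermanentError is ceilometer's blacklisting mechanism: a pollster that can never produce data for a resource raises it with the affected resources, and the manager drops those resources from that pollster's future cycles instead of erroring every interval. A sketch of the pattern; the class itself is illustrative, only the exception and base-class names come from ceilometer's plugin_base:

    from ceilometer.polling import plugin_base

    class RatePollsterSketch(plugin_base.PollsterBase):
        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # LibvirtInspector exposes raw counters, not precomputed rates,
            # so give up on these resources permanently instead of retrying.
            raise plugin_base.PollsterPermanentError(resources)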
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.973 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.973 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.973 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.974 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.974 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:05:04.974346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.974 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.975 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
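Note the interleaving: the "Updated heartbeat for ..." lines come from a different worker id (12) than the polling loop (14), so they can land a few lines after the poll they acknowledge (the packets.drop heartbeat, for instance, only surfaces during the packets.error cycle below). When watching agent health it is easier to extract the per-meter heartbeat timestamps than to follow the interleaved flow; a small sketch tailored to this exact log format:

    import re
    from datetime import datetime

    HEARTBEAT = re.compile(
        r"Updated heartbeat for (?P<meter>\S+) "
        r"\((?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+)\)")

    def latest_heartbeats(lines):
        seen = {}
        for line in lines:
            m = HEARTBEAT.search(line)
            if m:
                seen[m.group("meter")] = datetime.fromisoformat(m.group("ts"))
        return seen   # meter -> most recent heartbeat time

    line = ("2025-11-25 11:05:04.974 12 DEBUG ceilometer.polling.manager [-] "
            "Updated heartbeat for network.incoming.bytes "
            "(2025-11-25T11:05:04.974346) _update_status ...")
    print(latest_heartbeats([line]))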
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.975 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.975 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.975 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.976 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.976 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.976 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.977 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:05:04.976313) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.977 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.977 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.977 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.977 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.977 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.978 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.978 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.978 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.978 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/cpu volume: 9080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.978 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
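The cpu meter is cumulative guest CPU time in nanoseconds, so the single sample above (9080000000, i.e. 9.08 s of CPU time consumed since boot) says little on its own; utilisation comes from the rate of change between two polls. Worked example, with the second sample and the vCPU count assumed for illustration:

    t1_ns, cpu1_ns = 0, 9_080_000_000            # this poll: 9.08 s of CPU time
    t2_ns, cpu2_ns = 300e9, 9_080_000_000 + 3e9  # hypothetical poll 300 s later
    vcpus = 1                                    # assumed flavor size
    util_pct = (cpu2_ns - cpu1_ns) / (t2_ns - t1_ns) / vcpus * 100
    print(f"{util_pct:.1f}% average CPU")        # -> 1.0% average CPU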
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.979 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.980 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.980 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:05:04.977701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.981 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:05:04.978455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.981 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.982 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:05:04.980027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:04.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:05:04.982101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.004 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.005 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
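Two samples were emitted for disk.device.capacity because the instance has two block devices; the meter's unit is bytes, so the values decode directly, as below. Which device is which is not shown at this log level (the larger is presumably the 1 GiB root disk; calling the smaller one a config drive is an assumption):

    print(1073741824 / 2**30)   # 1.0  -> exactly 1 GiB
    print(509952 / 1024)        # 498.0 -> 498 KiB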
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.006 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.007 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:05:05.007224) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.049 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.050 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.050 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.051 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.051 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.051 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.052 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 1385236440 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.052 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:05:05.052182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.052 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 2471921 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.053 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.054 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:05:05.054401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.054 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.055 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.056 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:05:05.056748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.057 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.057 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.057 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.058 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.059 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:05:05.059213) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.059 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.060 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.060 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.061 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.061 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:05:05.061567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.061 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.062 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.063 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.063 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:05:05.063803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.064 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
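power.state encodes the libvirt domain state as an integer, so the volume of 1 above is a running guest. Decoding table (libvirt's virDomainState numbering; the assumption here is only that the LibvirtInspector named in these logs passes the value through unchanged):

    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])   # -> "running"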
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.065 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.065 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.065 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:05:05.065604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.065 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.066 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.066 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:05:05.067794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.068 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.068 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.069 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.069 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:05:05.069649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.070 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-25T11:05:05.071777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.072 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.072 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4>]
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:05:05.073669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.074 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.075 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:05:05.075427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.075 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:05:05.077410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:05:05.079041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.079 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:05:05.080893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.081 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:05 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:05:05.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.059 189385 DEBUG nova.compute.manager [req-8e6eb210-2873-4ded-bd2b-6960e812a9d8 req-9f08a398-d5f3-48bd-bebb-1d40e7826ebd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received event network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.060 189385 DEBUG oslo_concurrency.lockutils [req-8e6eb210-2873-4ded-bd2b-6960e812a9d8 req-9f08a398-d5f3-48bd-bebb-1d40e7826ebd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.061 189385 DEBUG oslo_concurrency.lockutils [req-8e6eb210-2873-4ded-bd2b-6960e812a9d8 req-9f08a398-d5f3-48bd-bebb-1d40e7826ebd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.061 189385 DEBUG oslo_concurrency.lockutils [req-8e6eb210-2873-4ded-bd2b-6960e812a9d8 req-9f08a398-d5f3-48bd-bebb-1d40e7826ebd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.062 189385 DEBUG nova.compute.manager [req-8e6eb210-2873-4ded-bd2b-6960e812a9d8 req-9f08a398-d5f3-48bd-bebb-1d40e7826ebd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] No waiting events found dispatching network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.063 189385 WARNING nova.compute.manager [req-8e6eb210-2873-4ded-bd2b-6960e812a9d8 req-9f08a398-d5f3-48bd-bebb-1d40e7826ebd d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received unexpected event network-vif-plugged-a1692084-6415-42ca-acb4-a814c874f56a for instance with vm_state active and task_state deleting.
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.279 189385 DEBUG nova.network.neutron [-] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.316 189385 INFO nova.compute.manager [-] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Took 3.45 seconds to deallocate network for instance.
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.382 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.383 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.387 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.595 189385 DEBUG nova.compute.provider_tree [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.615 189385 DEBUG nova.scheduler.client.report [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.686 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.786 189385 DEBUG nova.compute.manager [req-21bbafad-a5c6-4cf6-830b-f37871c808ca req-e82317ac-4f16-4707-b3a3-3d0f02f785b4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Received event network-vif-deleted-a1692084-6415-42ca-acb4-a814c874f56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.798 189385 INFO nova.scheduler.client.report [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Deleted allocations for instance 709ba638-65f8-4345-b8ca-b969e9719f92
Nov 25 11:05:06 compute-0 nova_compute[189381]: 2025-11-25 11:05:06.960 189385 DEBUG oslo_concurrency.lockutils [None req-bb19196c-821f-4c82-a6a3-ef1a6583ea0e 63532fa3761a42a3a6f2dbb256ccd5d1 2013a3a878cf48c19ee356b2eb249216 - - default default] Lock "709ba638-65f8-4345-b8ca-b969e9719f92" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:07 compute-0 nova_compute[189381]: 2025-11-25 11:05:07.357 189385 DEBUG nova.network.neutron [req-11b29044-90b1-4d4a-8b2e-5efa2ae005a3 req-bea970c9-afb0-4525-8dda-6632429ba8bf d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Updated VIF entry in instance network info cache for port a1692084-6415-42ca-acb4-a814c874f56a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:05:07 compute-0 nova_compute[189381]: 2025-11-25 11:05:07.358 189385 DEBUG nova.network.neutron [req-11b29044-90b1-4d4a-8b2e-5efa2ae005a3 req-bea970c9-afb0-4525-8dda-6632429ba8bf d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Updating instance_info_cache with network_info: [{"id": "a1692084-6415-42ca-acb4-a814c874f56a", "address": "fa:16:3e:25:99:84", "network": {"id": "d737f52f-9bd2-4fa0-b695-15c08aea25ba", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-649928792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2013a3a878cf48c19ee356b2eb249216", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa1692084-64", "ovs_interfaceid": "a1692084-6415-42ca-acb4-a814c874f56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:05:07 compute-0 nova_compute[189381]: 2025-11-25 11:05:07.379 189385 DEBUG oslo_concurrency.lockutils [req-11b29044-90b1-4d4a-8b2e-5efa2ae005a3 req-bea970c9-afb0-4525-8dda-6632429ba8bf d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-709ba638-65f8-4345-b8ca-b969e9719f92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:05:07 compute-0 nova_compute[189381]: 2025-11-25 11:05:07.788 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:08 compute-0 nova_compute[189381]: 2025-11-25 11:05:08.519 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:09 compute-0 nova_compute[189381]: 2025-11-25 11:05:09.162 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:10 compute-0 podman[255332]: 2025-11-25 11:05:10.955773082 +0000 UTC m=+0.071228283 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 25 11:05:10 compute-0 podman[255333]: 2025-11-25 11:05:10.978904821 +0000 UTC m=+0.092517639 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:05:12 compute-0 nova_compute[189381]: 2025-11-25 11:05:12.793 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:13 compute-0 nova_compute[189381]: 2025-11-25 11:05:13.522 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:13 compute-0 podman[255368]: 2025-11-25 11:05:13.969340299 +0000 UTC m=+0.079168712 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 11:05:17 compute-0 nova_compute[189381]: 2025-11-25 11:05:17.761 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068702.7590318, 709ba638-65f8-4345-b8ca-b969e9719f92 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:05:17 compute-0 nova_compute[189381]: 2025-11-25 11:05:17.762 189385 INFO nova.compute.manager [-] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] VM Stopped (Lifecycle Event)
Nov 25 11:05:17 compute-0 nova_compute[189381]: 2025-11-25 11:05:17.790 189385 DEBUG nova.compute.manager [None req-93b25551-92fa-41cb-a782-5ab5d390e938 - - - - - -] [instance: 709ba638-65f8-4345-b8ca-b969e9719f92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:05:17 compute-0 nova_compute[189381]: 2025-11-25 11:05:17.797 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:18 compute-0 nova_compute[189381]: 2025-11-25 11:05:18.526 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:19 compute-0 ovn_controller[97779]: 2025-11-25T11:05:19Z|00145|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:05:19 compute-0 nova_compute[189381]: 2025-11-25 11:05:19.469 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:19 compute-0 ovn_controller[97779]: 2025-11-25T11:05:19Z|00146|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:05:19 compute-0 nova_compute[189381]: 2025-11-25 11:05:19.767 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:20 compute-0 podman[255391]: 2025-11-25 11:05:20.942150898 +0000 UTC m=+0.059662888 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 25 11:05:22 compute-0 nova_compute[189381]: 2025-11-25 11:05:22.800 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:23 compute-0 nova_compute[189381]: 2025-11-25 11:05:23.528 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:23 compute-0 podman[255408]: 2025-11-25 11:05:23.958177428 +0000 UTC m=+0.071087596 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Nov 25 11:05:23 compute-0 podman[255409]: 2025-11-25 11:05:23.974804486 +0000 UTC m=+0.078146869 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 11:05:27 compute-0 nova_compute[189381]: 2025-11-25 11:05:27.805 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:28 compute-0 nova_compute[189381]: 2025-11-25 11:05:28.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:28 compute-0 nova_compute[189381]: 2025-11-25 11:05:28.490 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:28 compute-0 nova_compute[189381]: 2025-11-25 11:05:28.491 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:28 compute-0 nova_compute[189381]: 2025-11-25 11:05:28.529 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:28 compute-0 nova_compute[189381]: 2025-11-25 11:05:28.532 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:05:28 compute-0 podman[255453]: 2025-11-25 11:05:28.968293714 +0000 UTC m=+0.079352103 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:29 compute-0 podman[255452]: 2025-11-25 11:05:29.021970588 +0000 UTC m=+0.132017348 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118)
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.074 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.074 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.097 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.098 189385 INFO nova.compute.claims [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.369 189385 DEBUG nova.compute.provider_tree [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.383 189385 DEBUG nova.scheduler.client.report [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.423 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.424 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.504 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.505 189385 DEBUG nova.network.neutron [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.534 189385 INFO nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.570 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:05:29 compute-0 podman[203557]: time="2025-11-25T11:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:05:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:05:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.817 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.819 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.820 189385 INFO nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Creating image(s)
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.821 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "/var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.822 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "/var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.823 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "/var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.843 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.911 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.913 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.914 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.927 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.946 189385 DEBUG nova.policy [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97d307f20103434babe2431661f5bbdb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89069d3ee96a4fd493232b094a94877d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.984 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:29 compute-0 nova_compute[189381]: 2025-11-25 11:05:29.986 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.040 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.041 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.042 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.102 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.103 189385 DEBUG nova.virt.disk.api [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Checking if we can resize image /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.103 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.161 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.162 189385 DEBUG nova.virt.disk.api [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Cannot resize image /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.162 189385 DEBUG nova.objects.instance [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lazy-loading 'migration_context' on Instance uuid b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.183 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.184 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Ensure instance console log exists: /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.184 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.185 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.185 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.799 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.799 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:30 compute-0 nova_compute[189381]: 2025-11-25 11:05:30.880 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:05:31 compute-0 nova_compute[189381]: 2025-11-25 11:05:31.344 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:31 compute-0 nova_compute[189381]: 2025-11-25 11:05:31.344 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:31 compute-0 nova_compute[189381]: 2025-11-25 11:05:31.352 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:05:31 compute-0 nova_compute[189381]: 2025-11-25 11:05:31.353 189385 INFO nova.compute.claims [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:05:31 compute-0 openstack_network_exporter[205722]: ERROR   11:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:05:31 compute-0 openstack_network_exporter[205722]: ERROR   11:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:05:31 compute-0 openstack_network_exporter[205722]: ERROR   11:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:05:31 compute-0 openstack_network_exporter[205722]: ERROR   11:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:05:31 compute-0 openstack_network_exporter[205722]: ERROR   11:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:05:31 compute-0 nova_compute[189381]: 2025-11-25 11:05:31.568 189385 DEBUG nova.compute.provider_tree [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:05:31 compute-0 nova_compute[189381]: 2025-11-25 11:05:31.583 189385 DEBUG nova.scheduler.client.report [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
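[editor's note] Placement derives schedulable capacity for each resource class in the inventory above as (total - reserved) * allocation_ratio, with min_unit/max_unit/step_size constraining individual allocations. A worked check against the figures logged for provider a660730c-fa97-4a71-acf8-b1f3eef924ba:

    # Placement capacity formula: (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2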
Nov 25 11:05:32 compute-0 nova_compute[189381]: 2025-11-25 11:05:32.808 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:32 compute-0 nova_compute[189381]: 2025-11-25 11:05:32.826 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:32 compute-0 nova_compute[189381]: 2025-11-25 11:05:32.827 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:05:32 compute-0 podman[255523]: 2025-11-25 11:05:32.972187987 +0000 UTC m=+0.087924560 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:05:33 compute-0 ovn_controller[97779]: 2025-11-25T11:05:33Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:bc:05 10.100.2.10
Nov 25 11:05:33 compute-0 ovn_controller[97779]: 2025-11-25T11:05:33Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:bc:05 10.100.2.10
Nov 25 11:05:33 compute-0 nova_compute[189381]: 2025-11-25 11:05:33.533 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:33 compute-0 nova_compute[189381]: 2025-11-25 11:05:33.719 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:05:33 compute-0 nova_compute[189381]: 2025-11-25 11:05:33.719 189385 DEBUG nova.network.neutron [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:05:33 compute-0 nova_compute[189381]: 2025-11-25 11:05:33.806 189385 INFO nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:05:33 compute-0 nova_compute[189381]: 2025-11-25 11:05:33.895 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.045 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.045 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.045 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.046 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.145 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.206 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.207 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.267 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.574 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.575 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.576 189385 INFO nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Creating image(s)
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.576 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "/var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.576 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "/var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.577 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "/var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.590 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.651 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.652 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.653 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.664 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.679 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.680 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5167MB free_disk=72.1005859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.680 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.681 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.720 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.721 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.759 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
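[editor's note] The qemu-img create above builds the instance disk as a copy-on-write qcow2 overlay on the shared raw base image in _base, with a 1073741824-byte (1 GiB) virtual size; reads fall through to the base image and only writes land in the per-instance file, which is why multiple instances on this host reference the same 5e1076... base. A sketch of the equivalent invocation, with paths taken from the log (illustrative only):

    import subprocess

    base = "/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b"
    overlay = "/var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk"
    subprocess.check_call([
        "qemu-img", "create",
        "-f", "qcow2",                                  # overlay format
        "-o", f"backing_file={base},backing_fmt=raw",   # COW on the raw base image
        overlay,
        "1073741824",                                   # virtual size: 1 GiB
    ])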
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.760 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.761 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.826 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.827 189385 DEBUG nova.virt.disk.api [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Checking if we can resize image /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.828 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.888 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.889 189385 DEBUG nova.virt.disk.api [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Cannot resize image /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.889 189385 DEBUG nova.objects.instance [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lazy-loading 'migration_context' on Instance uuid 74072f60-1884-462d-9a69-28925a67978d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.900 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.901 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Ensure instance console log exists: /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.901 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.902 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:34 compute-0 nova_compute[189381]: 2025-11-25 11:05:34.902 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:35 compute-0 nova_compute[189381]: 2025-11-25 11:05:35.250 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:05:35 compute-0 nova_compute[189381]: 2025-11-25 11:05:35.251 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:05:35 compute-0 nova_compute[189381]: 2025-11-25 11:05:35.251 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 74072f60-1884-462d-9a69-28925a67978d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:05:35 compute-0 nova_compute[189381]: 2025-11-25 11:05:35.251 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:05:35 compute-0 nova_compute[189381]: 2025-11-25 11:05:35.252 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
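[editor's note] This "Final resource view" reconciles with the three per-instance placement allocations reported just above: used_ram 896 MB is the 512 MB host reservation plus 3 x 128 MB, used_disk 3 GB is 3 x 1 GB, and used_vcpus 3 is 3 x 1 VCPU. A quick check, with all values taken from the log:

    allocations = [{"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1}] * 3  # the 3 instances
    reserved_ram_mb = 512                                            # host reservation
    used_ram = reserved_ram_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_disk = sum(a["DISK_GB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)
    print(used_ram, used_disk, used_vcpus)   # 896 3 3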
Nov 25 11:05:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:36.071 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:36.072 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:36.072 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:36.463 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:05:36 compute-0 nova_compute[189381]: 2025-11-25 11:05:36.463 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:36.465 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:05:37 compute-0 nova_compute[189381]: 2025-11-25 11:05:37.754 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:05:37 compute-0 nova_compute[189381]: 2025-11-25 11:05:37.776 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:05:37 compute-0 nova_compute[189381]: 2025-11-25 11:05:37.812 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:37 compute-0 nova_compute[189381]: 2025-11-25 11:05:37.844 189385 DEBUG nova.policy [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '09f4a560d6494ec3aa4e1a291f7917c1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6daca89a9f274580a80130a94ea91f45', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:05:37 compute-0 nova_compute[189381]: 2025-11-25 11:05:37.867 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:05:37 compute-0 nova_compute[189381]: 2025-11-25 11:05:37.868 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:38 compute-0 nova_compute[189381]: 2025-11-25 11:05:38.043 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:38 compute-0 nova_compute[189381]: 2025-11-25 11:05:38.044 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:38 compute-0 nova_compute[189381]: 2025-11-25 11:05:38.044 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:05:38 compute-0 nova_compute[189381]: 2025-11-25 11:05:38.045 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:05:38 compute-0 nova_compute[189381]: 2025-11-25 11:05:38.084 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 11:05:38 compute-0 nova_compute[189381]: 2025-11-25 11:05:38.085 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 25 11:05:38 compute-0 nova_compute[189381]: 2025-11-25 11:05:38.536 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:39 compute-0 nova_compute[189381]: 2025-11-25 11:05:39.234 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:05:39 compute-0 nova_compute[189381]: 2025-11-25 11:05:39.235 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:05:39 compute-0 nova_compute[189381]: 2025-11-25 11:05:39.235 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:05:39 compute-0 nova_compute[189381]: 2025-11-25 11:05:39.235 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:05:40 compute-0 nova_compute[189381]: 2025-11-25 11:05:40.449 189385 DEBUG nova.network.neutron [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Successfully created port: e66646b4-49f7-478f-a2c1-e76f91c0dcb5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:05:41 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:41.468 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:41 compute-0 podman[255569]: 2025-11-25 11:05:41.958709107 +0000 UTC m=+0.068306555 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm)
Nov 25 11:05:41 compute-0 podman[255570]: 2025-11-25 11:05:41.972607527 +0000 UTC m=+0.077257733 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.665 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.814 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.863 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.864 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.865 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.865 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.865 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.896 189385 WARNING nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] While synchronizing instance power states, found 3 instances in the database and 1 instances on the hypervisor.
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.897 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.897 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.898 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 74072f60-1884-462d-9a69-28925a67978d _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.898 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.899 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.901 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.902 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.903 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.903 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:05:42 compute-0 nova_compute[189381]: 2025-11-25 11:05:42.957 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:43 compute-0 nova_compute[189381]: 2025-11-25 11:05:43.060 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:05:43 compute-0 nova_compute[189381]: 2025-11-25 11:05:43.537 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:43 compute-0 nova_compute[189381]: 2025-11-25 11:05:43.939 189385 DEBUG nova.network.neutron [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Successfully created port: 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:05:44 compute-0 podman[255606]: 2025-11-25 11:05:44.802775791 +0000 UTC m=+0.111241950 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 11:05:45 compute-0 nova_compute[189381]: 2025-11-25 11:05:45.113 189385 DEBUG nova.network.neutron [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Successfully updated port: e66646b4-49f7-478f-a2c1-e76f91c0dcb5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:05:45 compute-0 nova_compute[189381]: 2025-11-25 11:05:45.196 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:05:45 compute-0 nova_compute[189381]: 2025-11-25 11:05:45.197 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquired lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:05:45 compute-0 nova_compute[189381]: 2025-11-25 11:05:45.197 189385 DEBUG nova.network.neutron [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:05:45 compute-0 nova_compute[189381]: 2025-11-25 11:05:45.283 189385 DEBUG nova.compute.manager [req-db422d45-174b-4881-ba28-96f1adabe59b req-95b1099b-a2e7-4e0d-bead-02b389481024 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-changed-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:45 compute-0 nova_compute[189381]: 2025-11-25 11:05:45.283 189385 DEBUG nova.compute.manager [req-db422d45-174b-4881-ba28-96f1adabe59b req-95b1099b-a2e7-4e0d-bead-02b389481024 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Refreshing instance network info cache due to event network-changed-e66646b4-49f7-478f-a2c1-e76f91c0dcb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:05:45 compute-0 nova_compute[189381]: 2025-11-25 11:05:45.283 189385 DEBUG oslo_concurrency.lockutils [req-db422d45-174b-4881-ba28-96f1adabe59b req-95b1099b-a2e7-4e0d-bead-02b389481024 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:05:45 compute-0 nova_compute[189381]: 2025-11-25 11:05:45.825 189385 DEBUG nova.network.neutron [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.375 189385 DEBUG nova.network.neutron [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Successfully updated port: 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.473 189385 DEBUG nova.compute.manager [req-b2694ea6-1ea3-484a-9685-e1b02cc001ca req-80222210-9627-419e-b7be-9e564fa573eb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received event network-changed-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.474 189385 DEBUG nova.compute.manager [req-b2694ea6-1ea3-484a-9685-e1b02cc001ca req-80222210-9627-419e-b7be-9e564fa573eb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Refreshing instance network info cache due to event network-changed-086b3bc6-2c46-45d0-bc3e-f02fd307fe64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.474 189385 DEBUG oslo_concurrency.lockutils [req-b2694ea6-1ea3-484a-9685-e1b02cc001ca req-80222210-9627-419e-b7be-9e564fa573eb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.475 189385 DEBUG oslo_concurrency.lockutils [req-b2694ea6-1ea3-484a-9685-e1b02cc001ca req-80222210-9627-419e-b7be-9e564fa573eb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.475 189385 DEBUG nova.network.neutron [req-b2694ea6-1ea3-484a-9685-e1b02cc001ca req-80222210-9627-419e-b7be-9e564fa573eb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Refreshing network info cache for port 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.560 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.693 189385 DEBUG nova.network.neutron [req-b2694ea6-1ea3-484a-9685-e1b02cc001ca req-80222210-9627-419e-b7be-9e564fa573eb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.716 189385 DEBUG nova.network.neutron [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updating instance_info_cache with network_info: [{"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
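The network_info nova caches above is plain JSON, so the usual debugging fields (fixed IPs, MAC, MTU) can be pulled out with the standard library alone. A short sketch, with the blob abbreviated to the fields actually used; paste the full list from the log line above when debugging for real:

    import json

    # Abbreviated network_info as cached by nova (one dict per VIF).
    network_info = json.loads("""[{"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5",
      "address": "fa:16:3e:05:ce:5c",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.5"}]}],
                  "meta": {"mtu": 1442}},
      "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5"}]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])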
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.817 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.843 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Releasing lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.844 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Instance network_info: |[{"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.844 189385 DEBUG oslo_concurrency.lockutils [req-db422d45-174b-4881-ba28-96f1adabe59b req-95b1099b-a2e7-4e0d-bead-02b389481024 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.845 189385 DEBUG nova.network.neutron [req-db422d45-174b-4881-ba28-96f1adabe59b req-95b1099b-a2e7-4e0d-bead-02b389481024 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Refreshing network info cache for port e66646b4-49f7-478f-a2c1-e76f91c0dcb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.848 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Start _get_guest_xml network_info=[{"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.855 189385 WARNING nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.864 189385 DEBUG nova.virt.libvirt.host [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.865 189385 DEBUG nova.virt.libvirt.host [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.870 189385 DEBUG nova.virt.libvirt.host [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.870 189385 DEBUG nova.virt.libvirt.host [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.871 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.871 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.872 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.872 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.872 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.874 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.874 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.874 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.875 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.875 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.876 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.876 189385 DEBUG nova.virt.hardware [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
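The topology lines above follow a simple pattern: enumerate every (sockets, cores, threads) factorization of the vCPU count that fits within the limits, then sort by preference. A simplified sketch of that enumeration (not nova's exact code) reproducing the single 1:1:1 result logged for this 1-vCPU flavor:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log above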
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.879 189385 DEBUG nova.virt.libvirt.vif [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:05:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-401290240',display_name='tempest-TestNetworkBasicOps-server-401290240',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-401290240',id=12,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHA8R4q6qPFU+ALdVzgKo4U9D54rhiMYhyFh1DfoGFij9UC3wSOk8pBEA8MgYqf5zaKmFTI58V1qGOYP7Zgp5d4I8du77yh6rO6+SF28X0uZmieYLZNtgoLf/lManZdFug==',key_name='tempest-TestNetworkBasicOps-1314646098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89069d3ee96a4fd493232b094a94877d',ramdisk_id='',reservation_id='r-gnoseubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-448137458',owner_user_name='tempest-TestNetworkBasicOps-448137458-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:05:29Z,user_data=None,user_id='97d307f20103434babe2431661f5bbdb',uuid=b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.880 189385 DEBUG nova.network.os_vif_util [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converting VIF {"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.881 189385 DEBUG nova.network.os_vif_util [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ce:5c,bridge_name='br-int',has_traffic_filtering=True,id=e66646b4-49f7-478f-a2c1-e76f91c0dcb5,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66646b4-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
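nova_to_osvif_vif, logged just above, converts the JSON VIF model into a typed os-vif object. A hypothetical, minimal re-creation of that mapping (the real classes live in os_vif.objects; this dataclass only mirrors the fields shown in the repr above):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        # Field names mirror the repr logged above; this is an illustration,
        # not the real os_vif.objects class.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool
        preserve_on_delete: bool

    def nova_to_osvif_vif(vif: dict) -> VIFOpenVSwitch:
        """Map the nova network_info dict for one VIF onto the typed object."""
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            active=vif["active"],
            preserve_on_delete=vif["preserve_on_delete"],
        )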
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.882 189385 DEBUG nova.objects.instance [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lazy-loading 'pci_devices' on Instance uuid b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.899 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <uuid>b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f</uuid>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <name>instance-0000000c</name>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <nova:name>tempest-TestNetworkBasicOps-server-401290240</nova:name>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:05:47</nova:creationTime>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:05:47 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:05:47 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:05:47 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:05:47 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:05:47 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:05:47 compute-0 nova_compute[189381]:         <nova:user uuid="97d307f20103434babe2431661f5bbdb">tempest-TestNetworkBasicOps-448137458-project-member</nova:user>
Nov 25 11:05:47 compute-0 nova_compute[189381]:         <nova:project uuid="89069d3ee96a4fd493232b094a94877d">tempest-TestNetworkBasicOps-448137458</nova:project>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:05:47 compute-0 nova_compute[189381]:         <nova:port uuid="e66646b4-49f7-478f-a2c1-e76f91c0dcb5">
Nov 25 11:05:47 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <system>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <entry name="serial">b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f</entry>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <entry name="uuid">b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f</entry>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </system>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <os>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   </os>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <features>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   </features>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.config"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:05:ce:5c"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <target dev="tape66646b4-49"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/console.log" append="off"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <video>
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </video>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:05:47 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:05:47 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:05:47 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:05:47 compute-0 nova_compute[189381]: </domain>
Nov 25 11:05:47 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.900 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Preparing to wait for external event network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.901 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.901 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.901 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.902 189385 DEBUG nova.virt.libvirt.vif [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:05:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-401290240',display_name='tempest-TestNetworkBasicOps-server-401290240',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-401290240',id=12,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHA8R4q6qPFU+ALdVzgKo4U9D54rhiMYhyFh1DfoGFij9UC3wSOk8pBEA8MgYqf5zaKmFTI58V1qGOYP7Zgp5d4I8du77yh6rO6+SF28X0uZmieYLZNtgoLf/lManZdFug==',key_name='tempest-TestNetworkBasicOps-1314646098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89069d3ee96a4fd493232b094a94877d',ramdisk_id='',reservation_id='r-gnoseubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-448137458',owner_user_name='tempest-TestNetworkBasicOps-448137458-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:05:29Z,user_data=None,user_id='97d307f20103434babe2431661f5bbdb',uuid=b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.902 189385 DEBUG nova.network.os_vif_util [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converting VIF {"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.903 189385 DEBUG nova.network.os_vif_util [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:ce:5c,bridge_name='br-int',has_traffic_filtering=True,id=e66646b4-49f7-478f-a2c1-e76f91c0dcb5,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66646b4-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.903 189385 DEBUG os_vif [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ce:5c,bridge_name='br-int',has_traffic_filtering=True,id=e66646b4-49f7-478f-a2c1-e76f91c0dcb5,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66646b4-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.904 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.904 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.905 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.908 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.910 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape66646b4-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.911 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape66646b4-49, col_values=(('external_ids', {'iface-id': 'e66646b4-49f7-478f-a2c1-e76f91c0dcb5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:ce:5c', 'vm-uuid': 'b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
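The two OVSDB transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) have direct ovs-vsctl equivalents, which is often the quickest way to replay or verify them by hand. A sketch shelling out from Python, using the same bridge, port, and external_ids as the log (values are double-quoted for the OVSDB value parser, since they contain colons):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    run("ovs-vsctl", "--may-exist", "add-br", "br-int",
        "--", "set", "Bridge", "br-int", "datapath_type=system")
    # AddPortCommand(bridge=br-int, port=tape66646b4-49, may_exist=True)
    run("ovs-vsctl", "--may-exist", "add-port", "br-int", "tape66646b4-49")
    # DbSetCommand(table=Interface, record=tape66646b4-49, external_ids=...)
    run("ovs-vsctl", "set", "Interface", "tape66646b4-49",
        'external_ids:iface-id="e66646b4-49f7-478f-a2c1-e76f91c0dcb5"',
        'external_ids:iface-status="active"',
        'external_ids:attached-mac="fa:16:3e:05:ce:5c"',
        'external_ids:vm-uuid="b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f"')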
Nov 25 11:05:47 compute-0 NetworkManager[56317]: <info>  [1764068747.9136] manager: (tape66646b4-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.916 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.921 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.923 189385 INFO os_vif [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:ce:5c,bridge_name='br-int',has_traffic_filtering=True,id=e66646b4-49f7-478f-a2c1-e76f91c0dcb5,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66646b4-49')
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.970 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.971 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.971 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] No VIF found with MAC fa:16:3e:05:ce:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:05:47 compute-0 nova_compute[189381]: 2025-11-25 11:05:47.972 189385 INFO nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Using config drive
Nov 25 11:05:48 compute-0 nova_compute[189381]: 2025-11-25 11:05:48.540 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.228 189385 INFO nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Creating config drive at /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.config
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.232 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgfsotdx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.359 189385 DEBUG oslo_concurrency.processutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfgfsotdx" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
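The config drive is a plain ISO 9660 image, and the exact mkisofs invocation is captured in the log lines above, so it can be replayed directly. A sketch wrapping that command (paths are the ones from this log; note the publisher string is passed as a single argument):

    import subprocess

    def build_config_drive(out_path: str, src_dir: str) -> None:
        """Replay nova's mkisofs invocation as logged above."""
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", out_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
             "-quiet", "-J", "-r", "-V", "config-2", src_dir],
            check=True,
        )

    # build_config_drive(
    #     "/var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.config",
    #     "/tmp/tmpfgfsotdx")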
Nov 25 11:05:49 compute-0 kernel: tape66646b4-49: entered promiscuous mode
Nov 25 11:05:49 compute-0 NetworkManager[56317]: <info>  [1764068749.4227] manager: (tape66646b4-49): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Nov 25 11:05:49 compute-0 ovn_controller[97779]: 2025-11-25T11:05:49Z|00147|binding|INFO|Claiming lport e66646b4-49f7-478f-a2c1-e76f91c0dcb5 for this chassis.
Nov 25 11:05:49 compute-0 ovn_controller[97779]: 2025-11-25T11:05:49Z|00148|binding|INFO|e66646b4-49f7-478f-a2c1-e76f91c0dcb5: Claiming fa:16:3e:05:ce:5c 10.100.0.5
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.427 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.431 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:49 compute-0 systemd-udevd[255644]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.490 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:49 compute-0 NetworkManager[56317]: <info>  [1764068749.4959] device (tape66646b4-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:05:49 compute-0 ovn_controller[97779]: 2025-11-25T11:05:49Z|00149|binding|INFO|Setting lport e66646b4-49f7-478f-a2c1-e76f91c0dcb5 ovn-installed in OVS
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.497 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:49 compute-0 NetworkManager[56317]: <info>  [1764068749.5036] device (tape66646b4-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:05:49 compute-0 systemd-machined[155706]: New machine qemu-13-instance-0000000c.
Nov 25 11:05:49 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Nov 25 11:05:49 compute-0 ovn_controller[97779]: 2025-11-25T11:05:49Z|00150|binding|INFO|Setting lport e66646b4-49f7-478f-a2c1-e76f91c0dcb5 up in Southbound
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.669 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ce:5c 10.100.0.5'], port_security=['fa:16:3e:05:ce:5c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89069d3ee96a4fd493232b094a94877d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e62d1308-edba-4797-954c-6555434a8671', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=feead41c-bd30-4d7d-b182-8bed9968ffc7, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=e66646b4-49f7-478f-a2c1-e76f91c0dcb5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.670 106634 INFO neutron.agent.ovn.metadata.agent [-] Port e66646b4-49f7-478f-a2c1-e76f91c0dcb5 in datapath a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 bound to our chassis
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.671 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.682 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[66914234-1c8a-4bf1-a78a-6a6709ed8fee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.683 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa6f834aa-d1 in ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.685 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa6f834aa-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.685 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[425eb55a-483e-42c9-b246-b8e334605cbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.687 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0c155273-5dc1-4358-b027-d83e10741bb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.707 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2f573a-d316-437e-b385-057a21f4dae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.740 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[f82b0367-a566-4a3e-911c-2bf2b4750776]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.775 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[938932be-019d-40d5-a660-6c0397efebec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.789 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[44202f7d-61f4-41d1-9dbb-f3e38eee96af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 NetworkManager[56317]: <info>  [1764068749.7914] manager: (tapa6f834aa-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.836 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[e65a298d-ef12-444e-90db-04dbd2c76058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.839 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ef3822-7a65-4fdd-8cfd-d4c9ed9f0aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 NetworkManager[56317]: <info>  [1764068749.8691] device (tapa6f834aa-d0): carrier: link connected
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.874 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[44fd0445-7f68-46bc-96cd-1a33e0bd482c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.891 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[059fbf99-1068-4af6-9d00-e70eafcf23eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa6f834aa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:f4:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565744, 'reachable_time': 30685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255689, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.910 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[062b2cc5-8967-4131-8d97-e836ab694b05]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:f474'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565744, 'tstamp': 565744}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255690, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.929 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[fa434614-61d5-4631-a267-a914944a56ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa6f834aa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:f4:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565744, 'reachable_time': 30685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255692, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
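The two large replies above are netlink dumps (RTM_NEWLINK, RTM_NEWADDR) taken inside the new namespace; the agent uses them to confirm the inner veth end is up with the expected MAC and link-local address. Assuming pyroute2 is available, the same information can be read back directly:

    # Sketch: dump the veth end inside the namespace, as the privsep replies
    # above do, and pull out the attributes the agent cares about.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2') as ns:
        for link in ns.get_links():
            if link.get_attr('IFLA_IFNAME') == 'tapa6f834aa-d1':
                print(link.get_attr('IFLA_OPERSTATE'),  # 'UP' in the dump above
                      link.get_attr('IFLA_ADDRESS'))    # fa:16:3e:82:f4:74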
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.950 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068749.9487743, b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.950 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] VM Started (Lifecycle Event)
Nov 25 11:05:49 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:49.959 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2e00ab2a-eac6-4d17-b420-387bcd4a0ce3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.976 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.981 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068749.9488919, b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:05:49 compute-0 nova_compute[189381]: 2025-11-25 11:05:49.981 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] VM Paused (Lifecycle Event)
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.001 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.003 189385 DEBUG nova.network.neutron [req-b2694ea6-1ea3-484a-9685-e1b02cc001ca req-80222210-9627-419e-b7be-9e564fa573eb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.009 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.019 189385 DEBUG oslo_concurrency.lockutils [req-b2694ea6-1ea3-484a-9685-e1b02cc001ca req-80222210-9627-419e-b7be-9e564fa573eb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.021 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquired lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.021 189385 DEBUG nova.network.neutron [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.023 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf22ad4-119b-49b6-876f-ff8f4aed447d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.024 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] During sync_power_state the instance has a pending task (spawning). Skip.
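The 'Synchronizing instance power state' and 'Skip' lines above show nova reconciling the DB power_state (0, i.e. NOSTATE) against what libvirt reports (3, PAUSED): while task_state is 'spawning' the mismatch is expected, so no corrective action is taken. A stripped-down sketch of that decision; the real logic lives in nova/compute/manager.py (handle_lifecycle_event and the power-state sync it triggers):

    # Hypothetical simplification of the check logged above: a power-state
    # sync is skipped while the instance still has a pending task.
    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def maybe_sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:            # e.g. 'spawning' -> pending task
            return 'skip'                     # matches the "Skip." log line
        if db_power_state != vm_power_state:  # only reconcile real divergence
            return 'sync'
        return 'noop'

    assert maybe_sync_power_state(NOSTATE, PAUSED, 'spawning') == 'skip'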
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.025 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6f834aa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.025 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.026 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6f834aa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
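Note how the two transactions above are written to be idempotent: DelPortCommand carries if_exists=True and AddPortCommand may_exist=True, so re-running them against an already-correct OVS state is a no-op, which is exactly why the first one logs 'Transaction caused no change'. The pattern, sketched with a hypothetical stand-in class rather than ovsdbapp's real API:

    # Hedged sketch of the idempotent-command pattern seen above; this class
    # is a hypothetical stand-in, not ovsdbapp's actual implementation.
    class DelPortCommand:
        def __init__(self, bridge, port, if_exists=True):
            self.bridge, self.port, self.if_exists = bridge, port, if_exists

        def run(self, state):
            ports = state.setdefault(self.bridge, set())
            if self.port not in ports:
                if not self.if_exists:
                    raise LookupError(self.port)
                return 'no change'            # mirrors the log line above
            ports.discard(self.port)
            return 'changed'

    state = {'br-ex': set()}
    print(DelPortCommand('br-ex', 'tapa6f834aa-d0').run(state))  # 'no change'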
Nov 25 11:05:50 compute-0 kernel: tapa6f834aa-d0: entered promiscuous mode
Nov 25 11:05:50 compute-0 NetworkManager[56317]: <info>  [1764068750.0297] manager: (tapa6f834aa-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.032 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa6f834aa-d0, col_values=(('external_ids', {'iface-id': '702441f8-9440-4a38-a0f0-225d972b0155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:50 compute-0 ovn_controller[97779]: 2025-11-25T11:05:50Z|00151|binding|INFO|Releasing lport 702441f8-9440-4a38-a0f0-225d972b0155 from this chassis (sb_readonly=0)
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.037 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.041 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.042 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[18abaca0-c3db-47dd-ada0-032576bbabce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.043 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2.pid.haproxy
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
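The configuration printed above binds the link-local metadata address 169.254.169.254:80 inside the namespace and proxies requests to the Unix socket /var/lib/neutron/metadata_proxy, adding an X-OVN-Network-ID header so the metadata service can tell networks apart. A minimal sketch of rendering such a config from its parameters (a plain string template, not neutron's actual driver code):

    # Sketch: render a per-network haproxy config like the one logged above,
    # abridged to the lines that carry the parameters.
    TEMPLATE = """global
        pidfile     /var/lib/neutron/external/pids/{net}.pid.haproxy
        daemon

    listen listener
        bind {bind_ip}:{port}
        server metadata {socket_path}
        http-request add-header X-OVN-Network-ID {net}
    """

    def render(net, bind_ip='169.254.169.254', port=80,
               socket_path='/var/lib/neutron/metadata_proxy'):
        return TEMPLATE.format(net=net, bind_ip=bind_ip,
                               port=port, socket_path=socket_path)

    print(render('a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2'))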
Nov 25 11:05:50 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:50.043 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'env', 'PROCESS_TAG=haproxy-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
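The command above is how the proxy is actually started: via neutron-rootwrap, inside the ovnmeta namespace, with a PROCESS_TAG in the environment so the kill scripts can find the process later. For illustration only, the same argv launched from Python (requires root and a deployed neutron):

    # Illustration only: the exact argv from the log line above, launched
    # with subprocess. Needs root and the rootwrap config on a real node.
    import subprocess

    net = 'a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2'
    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec', f'ovnmeta-{net}',
           'env', f'PROCESS_TAG=haproxy-{net}',
           'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf']
    subprocess.run(cmd, check=True)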
Nov 25 11:05:50 compute-0 nova_compute[189381]: 2025-11-25 11:05:50.054 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:50 compute-0 podman[255724]: 2025-11-25 11:05:50.488325874 +0000 UTC m=+0.086384104 container create 691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:05:50 compute-0 systemd[1]: Started libpod-conmon-691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523.scope.
Nov 25 11:05:50 compute-0 podman[255724]: 2025-11-25 11:05:50.4557195 +0000 UTC m=+0.053777750 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:05:50 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:05:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1370c934ebef9537e2b36ccfe10491be88aa1b8b19308df9823b5d7adfeb7cb5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:50 compute-0 podman[255724]: 2025-11-25 11:05:50.603631496 +0000 UTC m=+0.201689746 container init 691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:05:50 compute-0 podman[255724]: 2025-11-25 11:05:50.611012649 +0000 UTC m=+0.209070879 container start 691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:05:50 compute-0 neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2[255739]: [NOTICE]   (255743) : New worker (255745) forked
Nov 25 11:05:50 compute-0 neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2[255739]: [NOTICE]   (255743) : Loading success.
Nov 25 11:05:51 compute-0 podman[255754]: 2025-11-25 11:05:51.971760499 +0000 UTC m=+0.076986672 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:05:51 compute-0 nova_compute[189381]: 2025-11-25 11:05:51.995 189385 DEBUG nova.network.neutron [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:05:52 compute-0 nova_compute[189381]: 2025-11-25 11:05:52.914 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:53 compute-0 nova_compute[189381]: 2025-11-25 11:05:53.543 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.770 189385 DEBUG nova.network.neutron [req-db422d45-174b-4881-ba28-96f1adabe59b req-95b1099b-a2e7-4e0d-bead-02b389481024 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updated VIF entry in instance network info cache for port e66646b4-49f7-478f-a2c1-e76f91c0dcb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.771 189385 DEBUG nova.network.neutron [req-db422d45-174b-4881-ba28-96f1adabe59b req-95b1099b-a2e7-4e0d-bead-02b389481024 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updating instance_info_cache with network_info: [{"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
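The payload above is nova's network_info model: a list of VIF dicts, each nesting network, subnets and ips. Pulling out the commonly needed fields (device name, MAC, fixed IPs, MTU) from an abridged copy of the logged entry:

    # Sketch: walk the network_info structure logged above (abridged copy).
    network_info = [{
        "id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5",
        "address": "fa:16:3e:05:ce:5c",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"cidr": "10.100.0.0/28",
                         "ips": [{"address": "10.100.0.5", "type": "fixed"}]}],
        },
        "devname": "tape66646b4-49",
    }]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], ips,
              vif["network"]["meta"]["mtu"])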
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.783 189385 DEBUG oslo_concurrency.lockutils [req-db422d45-174b-4881-ba28-96f1adabe59b req-95b1099b-a2e7-4e0d-bead-02b389481024 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.833 189385 DEBUG nova.network.neutron [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Updating instance_info_cache with network_info: [{"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.872 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Releasing lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.873 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Instance network_info: |[{"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.875 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Start _get_guest_xml network_info=[{"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.884 189385 WARNING nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.892 189385 DEBUG nova.virt.libvirt.host [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.893 189385 DEBUG nova.virt.libvirt.host [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.897 189385 DEBUG nova.virt.libvirt.host [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.898 189385 DEBUG nova.virt.libvirt.host [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.898 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.898 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.899 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.899 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.899 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.899 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.900 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.900 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.902 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.902 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.902 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.902 189385 DEBUG nova.virt.hardware [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
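The block above is nova.virt.hardware selecting a guest CPU topology: with no flavor or image preferences the limits default to 65536 per dimension, and for a single vCPU the only factorization is 1 socket x 1 core x 1 thread, which is what ends up in the domain XML below. A toy version of the enumeration (a simplification, not nova's exact algorithm):

    # Toy enumeration of (sockets, cores, threads) splits of a vCPU count
    # under per-dimension limits; nova/virt/hardware.py is richer than this.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log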
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.906 189385 DEBUG nova.virt.libvirt.vif [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:05:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-671773331',display_name='tempest-TestServerBasicOps-server-671773331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-671773331',id=13,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCm6h7gMXH3DYHNr5rdS+vbtvUVOkFXXJQVtLcM0GmrbK0AYY4Se5XWSLFwYlIxzP88Cl3TVscoHCphvEWXJNl+yg8pdZ5IvlZoWt0z45Iz6VKseG1WovCCMsAylx+LTkg==',key_name='tempest-TestServerBasicOps-1049920664',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6daca89a9f274580a80130a94ea91f45',ramdisk_id='',reservation_id='r-cil2kb8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-382705340',owner_user_name='tempest-TestServerBasicOps-382705340-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:05:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f4a560d6494ec3aa4e1a291f7917c1',uuid=74072f60-1884-462d-9a69-28925a67978d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.907 189385 DEBUG nova.network.os_vif_util [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Converting VIF {"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.907 189385 DEBUG nova.network.os_vif_util [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:ef:83,bridge_name='br-int',has_traffic_filtering=True,id=086b3bc6-2c46-45d0-bc3e-f02fd307fe64,network=Network(5a488783-81eb-4a79-a4fc-78987bdf65c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086b3bc6-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
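The conversion above projects nova's VIF dict onto os-vif's typed VIFOpenVSwitch model, which the os-vif ovs plugin then consumes to plug the port. A hypothetical dataclass rendition of just the fields visible in the logged repr (os-vif's real classes live in os_vif.objects.vif and carry more state):

    # Hypothetical stand-in for the conversion logged above; field names
    # follow the VIFOpenVSwitch repr in the log, not os-vif's actual classes.
    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool

    def nova_to_osvif(vif):
        return VIFOpenVSwitch(id=vif["id"], address=vif["address"],
                              bridge_name=vif["details"]["bridge_name"],
                              vif_name=vif["devname"], active=vif["active"])

    vif = {"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64",
           "address": "fa:16:3e:5a:ef:83", "devname": "tap086b3bc6-2c",
           "active": False, "details": {"bridge_name": "br-int"}}
    print(nova_to_osvif(vif))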
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.908 189385 DEBUG nova.objects.instance [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74072f60-1884-462d-9a69-28925a67978d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.920 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <uuid>74072f60-1884-462d-9a69-28925a67978d</uuid>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <name>instance-0000000d</name>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <nova:name>tempest-TestServerBasicOps-server-671773331</nova:name>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:05:54</nova:creationTime>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:05:54 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:05:54 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:05:54 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:05:54 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:05:54 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:05:54 compute-0 nova_compute[189381]:         <nova:user uuid="09f4a560d6494ec3aa4e1a291f7917c1">tempest-TestServerBasicOps-382705340-project-member</nova:user>
Nov 25 11:05:54 compute-0 nova_compute[189381]:         <nova:project uuid="6daca89a9f274580a80130a94ea91f45">tempest-TestServerBasicOps-382705340</nova:project>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:05:54 compute-0 nova_compute[189381]:         <nova:port uuid="086b3bc6-2c46-45d0-bc3e-f02fd307fe64">
Nov 25 11:05:54 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <system>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <entry name="serial">74072f60-1884-462d-9a69-28925a67978d</entry>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <entry name="uuid">74072f60-1884-462d-9a69-28925a67978d</entry>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </system>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <os>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   </os>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <features>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   </features>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk.config"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:5a:ef:83"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <target dev="tap086b3bc6-2c"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/console.log" append="off"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <video>
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </video>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:05:54 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:05:54 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:05:54 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:05:54 compute-0 nova_compute[189381]: </domain>
Nov 25 11:05:54 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
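[editor's note] The block above is the complete libvirt domain XML that Nova generated for instance 74072f60-1884-462d-9a69-28925a67978d. A minimal sketch of reading the same definition back from libvirt for comparison (assumes libvirt-python is installed and qemu:///system is reachable; this is illustration, not part of the log):

    # Sketch: fetch the live domain XML that Nova handed to libvirt above.
    # The UUID is taken from the log; access typically requires root or
    # membership in the libvirt group.
    import libvirt

    INSTANCE_UUID = "74072f60-1884-462d-9a69-28925a67978d"

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString(INSTANCE_UUID)
        print(dom.XMLDesc(0))  # current (live) definition, for diffing
    finally:
        conn.close()

Running "virsh dumpxml instance-0000000d" (the domain name visible further down in the log) should show equivalent output.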
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.920 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Preparing to wait for external event network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.920 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.921 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.921 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.922 189385 DEBUG nova.virt.libvirt.vif [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:05:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-671773331',display_name='tempest-TestServerBasicOps-server-671773331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-671773331',id=13,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCm6h7gMXH3DYHNr5rdS+vbtvUVOkFXXJQVtLcM0GmrbK0AYY4Se5XWSLFwYlIxzP88Cl3TVscoHCphvEWXJNl+yg8pdZ5IvlZoWt0z45Iz6VKseG1WovCCMsAylx+LTkg==',key_name='tempest-TestServerBasicOps-1049920664',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6daca89a9f274580a80130a94ea91f45',ramdisk_id='',reservation_id='r-cil2kb8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-382705340',owner_user_name='tempest-TestServerBasicOps-382705340-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:05:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f4a560d6494ec3aa4e1a291f7917c1',uuid=74072f60-1884-462d-9a69-28925a67978d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.922 189385 DEBUG nova.network.os_vif_util [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Converting VIF {"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.922 189385 DEBUG nova.network.os_vif_util [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:ef:83,bridge_name='br-int',has_traffic_filtering=True,id=086b3bc6-2c46-45d0-bc3e-f02fd307fe64,network=Network(5a488783-81eb-4a79-a4fc-78987bdf65c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086b3bc6-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.923 189385 DEBUG os_vif [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:ef:83,bridge_name='br-int',has_traffic_filtering=True,id=086b3bc6-2c46-45d0-bc3e-f02fd307fe64,network=Network(5a488783-81eb-4a79-a4fc-78987bdf65c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086b3bc6-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.923 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.924 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.924 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.926 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.927 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap086b3bc6-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.927 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap086b3bc6-2c, col_values=(('external_ids', {'iface-id': '086b3bc6-2c46-45d0-bc3e-f02fd307fe64', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:ef:83', 'vm-uuid': '74072f60-1884-462d-9a69-28925a67978d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.928 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.931 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:05:54 compute-0 NetworkManager[56317]: <info>  [1764068754.9350] manager: (tap086b3bc6-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.937 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:54 compute-0 nova_compute[189381]: 2025-11-25 11:05:54.938 189385 INFO os_vif [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:ef:83,bridge_name='br-int',has_traffic_filtering=True,id=086b3bc6-2c46-45d0-bc3e-f02fd307fe64,network=Network(5a488783-81eb-4a79-a4fc-78987bdf65c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086b3bc6-2c')
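[editor's note] "Successfully plugged vif" reflects the AddPortCommand/DbSetCommand transactions a few lines above. A sketch for verifying the resulting OVS Interface record (assumes ovs-vsctl is available on the host; the port name is taken from the log):

    # Sketch: confirm the external_ids that os-vif set on the OVS interface.
    import subprocess

    PORT = "tap086b3bc6-2c"

    out = subprocess.run(
        ["ovs-vsctl", "get", "Interface", PORT, "external_ids"],
        capture_output=True, text=True, check=True,
    )
    # Expect iface-id, attached-mac and vm-uuid matching the DbSetCommand above.
    print(out.stdout.strip())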
Nov 25 11:05:54 compute-0 podman[255771]: 2025-11-25 11:05:54.961339876 +0000 UTC m=+0.069101023 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7)
Nov 25 11:05:54 compute-0 podman[255772]: 2025-11-25 11:05:54.986271979 +0000 UTC m=+0.087004932 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:05:55 compute-0 nova_compute[189381]: 2025-11-25 11:05:55.000 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:05:55 compute-0 nova_compute[189381]: 2025-11-25 11:05:55.001 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:05:55 compute-0 nova_compute[189381]: 2025-11-25 11:05:55.001 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] No VIF found with MAC fa:16:3e:5a:ef:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:05:55 compute-0 nova_compute[189381]: 2025-11-25 11:05:55.001 189385 INFO nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Using config drive
Nov 25 11:05:55 compute-0 nova_compute[189381]: 2025-11-25 11:05:55.794 189385 INFO nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Creating config drive at /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk.config
Nov 25 11:05:55 compute-0 nova_compute[189381]: 2025-11-25 11:05:55.801 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2hlrdrfh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:05:55 compute-0 nova_compute[189381]: 2025-11-25 11:05:55.928 189385 DEBUG oslo_concurrency.processutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2hlrdrfh" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
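[editor's note] The mkisofs run above packages the config drive with volume label config-2. A sketch for inspecting the finished ISO (assumes root privileges; /mnt/cd is a hypothetical mount point; openstack/latest/ is the standard config-drive layout):

    # Sketch: mount the config drive read-only and read the instance metadata.
    import json
    import subprocess

    ISO = "/var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk.config"
    MNT = "/mnt/cd"  # hypothetical, must exist

    subprocess.run(["mount", "-o", "loop,ro", ISO, MNT], check=True)
    try:
        with open(f"{MNT}/openstack/latest/meta_data.json") as f:
            print(json.load(f)["uuid"])  # should match the instance UUID
    finally:
        subprocess.run(["umount", MNT], check=True)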
Nov 25 11:05:55 compute-0 kernel: tap086b3bc6-2c: entered promiscuous mode
Nov 25 11:05:55 compute-0 NetworkManager[56317]: <info>  [1764068755.9855] manager: (tap086b3bc6-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 25 11:05:55 compute-0 nova_compute[189381]: 2025-11-25 11:05:55.990 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:55 compute-0 ovn_controller[97779]: 2025-11-25T11:05:55Z|00152|binding|INFO|Claiming lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 for this chassis.
Nov 25 11:05:55 compute-0 ovn_controller[97779]: 2025-11-25T11:05:55Z|00153|binding|INFO|086b3bc6-2c46-45d0-bc3e-f02fd307fe64: Claiming fa:16:3e:5a:ef:83 10.100.0.7
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.006 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:ef:83 10.100.0.7'], port_security=['fa:16:3e:5a:ef:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '74072f60-1884-462d-9a69-28925a67978d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6daca89a9f274580a80130a94ea91f45', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8ed7556-296c-4f8a-8d14-b7db687fcc5d d6b174b5-3e6d-4fce-b47c-4c0b0e953e7c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b5f4bb0-b48a-4dd3-b95b-544c18545f75, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=086b3bc6-2c46-45d0-bc3e-f02fd307fe64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.007 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 in datapath 5a488783-81eb-4a79-a4fc-78987bdf65c9 bound to our chassis
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.009 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5a488783-81eb-4a79-a4fc-78987bdf65c9
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.025 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[b3af6aa8-a9b0-4b47-aee9-fc0859fc0b2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.026 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5a488783-81 in ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 25 11:05:56 compute-0 systemd-udevd[255836]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.028 239582 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5a488783-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.028 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[7db33eb8-0d67-4c32-946e-ecb83c6e41b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.030 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[569c1988-4ad8-46ee-a295-50fa11691b7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 systemd-machined[155706]: New machine qemu-14-instance-0000000d.
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.046 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.049 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[64d3a5db-b815-4fde-86b1-f807e33ee432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 NetworkManager[56317]: <info>  [1764068756.0533] device (tap086b3bc6-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:05:56 compute-0 ovn_controller[97779]: 2025-11-25T11:05:56Z|00154|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:05:56 compute-0 ovn_controller[97779]: 2025-11-25T11:05:56Z|00155|binding|INFO|Releasing lport 702441f8-9440-4a38-a0f0-225d972b0155 from this chassis (sb_readonly=0)
Nov 25 11:05:56 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Nov 25 11:05:56 compute-0 NetworkManager[56317]: <info>  [1764068756.0586] device (tap086b3bc6-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:05:56 compute-0 ovn_controller[97779]: 2025-11-25T11:05:56Z|00156|binding|INFO|Setting lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 up in Southbound
Nov 25 11:05:56 compute-0 ovn_controller[97779]: 2025-11-25T11:05:56Z|00157|binding|INFO|Setting lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 ovn-installed in OVS
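[editor's note] ovn-controller has now claimed the lport and marked it up/ovn-installed. A sketch for cross-checking the Southbound Port_Binding row it updated (assumes ovn-sbctl is installed and can reach the SB database):

    # Sketch: look up the Port_Binding row for the logical port from the log.
    import subprocess

    LPORT = "086b3bc6-2c46-45d0-bc3e-f02fd307fe64"

    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True,
    )
    # chassis should reference compute-0 and up should read [true] once claimed.
    print(out.stdout)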
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.066 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.069 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[fae69767-e01f-4fb1-ae8a-6bdd43f51dfa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.104 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[43f09fec-d8e9-49e5-b630-0a2682c94218]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.112 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[6dedc3b8-20b1-49b2-93de-28ded6b476d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 NetworkManager[56317]: <info>  [1764068756.1143] manager: (tap5a488783-80): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.149 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[168da857-53ae-43f9-a700-3e349a8bbed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.154 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[496b32b2-e13f-42c6-9ec9-cd1641c074a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 NetworkManager[56317]: <info>  [1764068756.1847] device (tap5a488783-80): carrier: link connected
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.193 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[d37bc3ee-d862-4b17-92a7-e981ea2ca7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.214 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[00ffcab9-8fef-4cb8-b968-f6610d768257]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a488783-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8a:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566376, 'reachable_time': 23774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255869, 'error': None, 'target': 'ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.236 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[593f8728-a88f-4ec8-86cb-87975cdd59d1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:8aef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566376, 'tstamp': 566376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255870, 'error': None, 'target': 'ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.253 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[ad9d0cb3-657e-4486-ba1d-14b5e4a4d177]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a488783-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:8a:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566376, 'reachable_time': 23774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255871, 'error': None, 'target': 'ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.284 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[00284fca-dd67-48bd-a882-918735d1ea51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.349 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[54534b35-594d-4656-bcdb-04a936c835e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.351 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a488783-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.352 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.353 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a488783-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.355 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:56 compute-0 kernel: tap5a488783-80: entered promiscuous mode
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.357 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:56 compute-0 NetworkManager[56317]: <info>  [1764068756.3607] manager: (tap5a488783-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.367 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5a488783-80, col_values=(('external_ids', {'iface-id': 'b24d50bb-05f2-41c3-b57f-00165f8fc524'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.369 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:56 compute-0 ovn_controller[97779]: 2025-11-25T11:05:56Z|00158|binding|INFO|Releasing lport b24d50bb-05f2-41c3-b57f-00165f8fc524 from this chassis (sb_readonly=0)
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.370 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.371 106634 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5a488783-81eb-4a79-a4fc-78987bdf65c9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5a488783-81eb-4a79-a4fc-78987bdf65c9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.373 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[002bf80b-41d8-4a5d-9626-bea878e95107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.375 106634 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: global
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     log         /dev/log local0 debug
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     log-tag     haproxy-metadata-proxy-5a488783-81eb-4a79-a4fc-78987bdf65c9
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     user        root
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     group       root
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     maxconn     1024
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     pidfile     /var/lib/neutron/external/pids/5a488783-81eb-4a79-a4fc-78987bdf65c9.pid.haproxy
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     daemon
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: defaults
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     log global
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     mode http
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     option httplog
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     option dontlognull
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     option http-server-close
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     option forwardfor
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     retries                 3
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     timeout http-request    30s
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     timeout connect         30s
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     timeout client          32s
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     timeout server          32s
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     timeout http-keep-alive 30s
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: listen listener
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     bind 169.254.169.254:80
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     server metadata /var/lib/neutron/metadata_proxy
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:     http-request add-header X-OVN-Network-ID 5a488783-81eb-4a79-a4fc-78987bdf65c9
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 25 11:05:56 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:05:56.378 106634 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'env', 'PROCESS_TAG=haproxy-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5a488783-81eb-4a79-a4fc-78987bdf65c9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
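[editor's note] The agent writes the haproxy config shown above and launches haproxy inside the ovnmeta- namespace. A sketch for a basic reachability check against the 169.254.169.254:80 bind from that namespace (assumes root and curl; what the metadata service returns depends on how it resolves the requesting IP, so treat this as a connectivity probe rather than a full metadata fetch):

    # Sketch: probe the metadata proxy from inside the OVN metadata namespace.
    # Namespace name and bind address are taken from the haproxy config above.
    import subprocess

    NETNS = "ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9"

    subprocess.run(
        ["ip", "netns", "exec", NETNS,
         "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}\n",
         "http://169.254.169.254/openstack/latest/meta_data.json"],
        check=True,
    )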
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.387 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.583 189385 DEBUG nova.compute.manager [req-15bad7de-1970-4cca-9642-d7418de16961 req-a834a40d-b319-4fc5-b4d7-968564ebf130 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received event network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.584 189385 DEBUG oslo_concurrency.lockutils [req-15bad7de-1970-4cca-9642-d7418de16961 req-a834a40d-b319-4fc5-b4d7-968564ebf130 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.584 189385 DEBUG oslo_concurrency.lockutils [req-15bad7de-1970-4cca-9642-d7418de16961 req-a834a40d-b319-4fc5-b4d7-968564ebf130 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.584 189385 DEBUG oslo_concurrency.lockutils [req-15bad7de-1970-4cca-9642-d7418de16961 req-a834a40d-b319-4fc5-b4d7-968564ebf130 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.584 189385 DEBUG nova.compute.manager [req-15bad7de-1970-4cca-9642-d7418de16961 req-a834a40d-b319-4fc5-b4d7-968564ebf130 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Processing event network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.586 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068756.5859463, 74072f60-1884-462d-9a69-28925a67978d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.586 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] VM Started (Lifecycle Event)
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.589 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.593 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.599 189385 INFO nova.virt.libvirt.driver [-] [instance: 74072f60-1884-462d-9a69-28925a67978d] Instance spawned successfully.
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.599 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.618 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.626 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
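[editor's note] The numeric states in the sync message above ("current DB power_state: 0, VM power_state: 1") are Nova's power_state constants; to the best of my knowledge they are defined as follows (sketch mirroring nova.compute.power_state, shown here for reading the log, not quoted from it):

    # Sketch: Nova power_state constants referenced by the sync message above.
    NOSTATE = 0x00    # DB value before the first successful sync
    RUNNING = 0x01    # what libvirt reports once the guest starts
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07

This also explains the "Paused"/"Resumed" lifecycle events at the end of this boot sequence: libvirt briefly pauses the guest during startup before resuming it.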
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.638 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.638 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.639 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.639 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.639 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.640 189385 DEBUG nova.virt.libvirt.driver [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
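The six "Found default for ..." lines record which virtual-hardware defaults the libvirt driver picked for image properties the image itself did not set, so the same buses and models can be reused for this instance later. Condensed as data, the choices logged above are exactly the following; the helper is an illustrative stand-in for _register_undefined_instance_details, not Nova's real code:

    # Defaults exactly as logged above for this guest.
    DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props):
        # Only fill in properties the image left unset.
        for prop, default in DEFAULTS.items():
            if prop not in image_props:
                image_props[prop] = default
                print(f"Found default for {prop} of {default}")
        return image_props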
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.662 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] During sync_power_state the instance has a pending task (spawning). Skip.
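That "pending task (spawning). Skip." line is a guard: the "Started" lifecycle event raced the still-running spawn (DB power_state 0 vs. VM power_state 1 above), and power-state sync refuses to touch an instance another code path currently owns. The decision reduces to something like this illustrative sketch, not Nova's actual function:

    def sync_power_state(instance, vm_power_state):
        # instance: row as logged above, e.g. vm_state="building",
        # task_state="spawning", power_state=0; vm_power_state=1.
        if instance["task_state"] is not None:
            # Another path (here: spawn) owns the instance; bail out.
            print("During sync_power_state the instance has a pending "
                  f"task ({instance['task_state']}). Skip.")
            return
        if instance["power_state"] != vm_power_state:
            instance["power_state"] = vm_power_state  # persist the change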
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.663 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068756.586061, 74072f60-1884-462d-9a69-28925a67978d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.664 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] VM Paused (Lifecycle Event)
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.685 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.693 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068756.5919337, 74072f60-1884-462d-9a69-28925a67978d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.693 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] VM Resumed (Lifecycle Event)
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.716 189385 INFO nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Took 22.14 seconds to spawn the instance on the hypervisor.
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.716 189385 DEBUG nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.717 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.728 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.760 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:05:56 compute-0 podman[255910]: 2025-11-25 11:05:56.791801588 +0000 UTC m=+0.062297946 container create cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.798 189385 INFO nova.compute.manager [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Took 25.69 seconds to build instance.
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.832 189385 DEBUG oslo_concurrency.lockutils [None req-aae09e7b-e65b-4ad1-ad5a-fb00192bb744 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 26.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.833 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "74072f60-1884-462d-9a69-28925a67978d" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.833 189385 INFO nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:05:56 compute-0 nova_compute[189381]: 2025-11-25 11:05:56.834 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "74072f60-1884-462d-9a69-28925a67978d" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
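The Acquiring/acquired/released lines come from oslo.concurrency's lockutils, which serializes work per instance by using the instance UUID as the lock name; the "waited 13.930s" above is the power-state sync queuing behind _locked_do_build_and_run_instance, which held the lock for 26.032 s. Usage is just a decorator (lockutils.synchronized is the real oslo.concurrency API; the function body here is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("74072f60-1884-462d-9a69-28925a67978d")
    def query_driver_power_state_and_sync():
        # Runs only once the build path releases the per-instance lock,
        # hence the long wait logged above.
        pass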
Nov 25 11:05:56 compute-0 podman[255910]: 2025-11-25 11:05:56.757041281 +0000 UTC m=+0.027537659 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 25 11:05:56 compute-0 systemd[1]: Started libpod-conmon-cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a.scope.
Nov 25 11:05:56 compute-0 systemd[1]: Started libcrun container.
Nov 25 11:05:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e8ef83a0850ec62cee7872b5a1f79490aa7d80bb176fc0aca15a6b203f6eb22/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 25 11:05:56 compute-0 podman[255910]: 2025-11-25 11:05:56.941419493 +0000 UTC m=+0.211915871 container init cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:05:56 compute-0 podman[255910]: 2025-11-25 11:05:56.949875418 +0000 UTC m=+0.220371776 container start cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118)
Nov 25 11:05:56 compute-0 neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9[255924]: [NOTICE]   (255928) : New worker (255930) forked
Nov 25 11:05:56 compute-0 neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9[255924]: [NOTICE]   (255928) : Loading success.
Nov 25 11:05:58 compute-0 NetworkManager[56317]: <info>  [1764068758.1610] manager: (patch-br-int-to-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 25 11:05:58 compute-0 NetworkManager[56317]: <info>  [1764068758.1646] manager: (patch-provnet-c6710824-030e-46d7-bb7a-3dd11e74ee72-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.159 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.380 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:58 compute-0 ovn_controller[97779]: 2025-11-25T11:05:58Z|00159|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:05:58 compute-0 ovn_controller[97779]: 2025-11-25T11:05:58Z|00160|binding|INFO|Releasing lport b24d50bb-05f2-41c3-b57f-00165f8fc524 from this chassis (sb_readonly=0)
Nov 25 11:05:58 compute-0 ovn_controller[97779]: 2025-11-25T11:05:58Z|00161|binding|INFO|Releasing lport 702441f8-9440-4a38-a0f0-225d972b0155 from this chassis (sb_readonly=0)
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.416 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.545 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.744 189385 DEBUG nova.compute.manager [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received event network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.745 189385 DEBUG oslo_concurrency.lockutils [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.745 189385 DEBUG oslo_concurrency.lockutils [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.746 189385 DEBUG oslo_concurrency.lockutils [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.747 189385 DEBUG nova.compute.manager [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] No waiting events found dispatching network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.747 189385 WARNING nova.compute.manager [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received unexpected event network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 for instance with vm_state active and task_state None.
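This WARNING is benign here: Neutron re-sent network-vif-plugged for port 086b3bc6 after the build had already finished (vm_state active, task_state None), so pop_instance_event found nobody waiting and the event was dropped. The lookup amounts to roughly the following; the structure is hypothetical, not Nova's implementation:

    # (instance_uuid, event_name) -> waiter registered by the spawn path.
    _waiters = {}

    def pop_instance_event(instance_uuid, event_name):
        waiter = _waiters.pop((instance_uuid, event_name), None)
        if waiter is None:
            # Nothing registered interest: log and drop, as above.
            print(f"No waiting events found dispatching {event_name}")
            return None
        return waiter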
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.747 189385 DEBUG nova.compute.manager [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received event network-changed-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.748 189385 DEBUG nova.compute.manager [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Refreshing instance network info cache due to event network-changed-086b3bc6-2c46-45d0-bc3e-f02fd307fe64. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.748 189385 DEBUG oslo_concurrency.lockutils [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.749 189385 DEBUG oslo_concurrency.lockutils [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:05:58 compute-0 nova_compute[189381]: 2025-11-25 11:05:58.749 189385 DEBUG nova.network.neutron [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Refreshing network info cache for port 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:05:59 compute-0 podman[203557]: time="2025-11-25T11:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:05:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Nov 25 11:05:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5726 "" "Go-http-client/1.1"
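These two podman lines are the service-side access log of the libpod REST API: a collector is listing containers and fetching stats over the unix socket (the CONTAINER_HOST=unix:///run/podman/podman.sock wiring appears in the podman_exporter config further down). The same container listing can be reproduced with the stdlib alone; the socket path is taken from that config and may differ on other hosts:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (stdlib-only sketch)."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])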
Nov 25 11:05:59 compute-0 nova_compute[189381]: 2025-11-25 11:05:59.929 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:05:59 compute-0 podman[255941]: 2025-11-25 11:05:59.990304909 +0000 UTC m=+0.098471194 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 25 11:05:59 compute-0 podman[255940]: 2025-11-25 11:05:59.994082489 +0000 UTC m=+0.106839387 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 25 11:06:00 compute-0 nova_compute[189381]: 2025-11-25 11:06:00.759 189385 DEBUG nova.network.neutron [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Updated VIF entry in instance network info cache for port 086b3bc6-2c46-45d0-bc3e-f02fd307fe64. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:06:00 compute-0 nova_compute[189381]: 2025-11-25 11:06:00.759 189385 DEBUG nova.network.neutron [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Updating instance_info_cache with network_info: [{"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
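The instance_info_cache payload above is plain JSON, so the port's addressing can be pulled out mechanically; with the structure exactly as logged, a few lines suffice:

    # Trimmed copy of the cache entry logged above.
    vif = {
        "id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64",
        "network": {"subnets": [{"ips": [
            {"address": "10.100.0.7",
             "floating_ips": [{"address": "192.168.122.180"}]},
        ]}]},
    }

    def addresses(vif):
        # Yield (fixed IP, [floating IPs]) pairs for one VIF entry.
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                yield ip["address"], [f["address"] for f in ip["floating_ips"]]

    for fixed, floating in addresses(vif):
        print(fixed, floating)  # 10.100.0.7 ['192.168.122.180']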
Nov 25 11:06:00 compute-0 nova_compute[189381]: 2025-11-25 11:06:00.815 189385 DEBUG oslo_concurrency.lockutils [req-802c8ad6-7601-4c32-8d5c-7a2bb0eb8012 req-3a690b47-0689-427f-8911-09681eb41ba0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-74072f60-1884-462d-9a69-28925a67978d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:06:01 compute-0 openstack_network_exporter[205722]: ERROR   11:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:06:01 compute-0 openstack_network_exporter[205722]: ERROR   11:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:06:01 compute-0 openstack_network_exporter[205722]: ERROR   11:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:06:01 compute-0 openstack_network_exporter[205722]: ERROR   11:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:06:01 compute-0 openstack_network_exporter[205722]: ERROR   11:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
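These exporter errors recur on every scrape and are expected on this node: ovn-northd is a control-plane daemon that does not run on a compute, so the appctl-style lookup of its control socket finds nothing, and the dpif-netdev calls fail because this host uses the kernel datapath (datapath_type "system" in the VIF details above), not a userspace one. The failing discovery step is plausibly no more than the following; the run directory and socket naming are assumptions, labeled as such:

    import glob

    def find_ctl(daemon, rundir="/var/run/ovn"):
        # Assumed layout: daemons expose <rundir>/<daemon>.<pid>.ctl.
        matches = glob.glob(f"{rundir}/{daemon}.*.ctl")
        if not matches:
            raise FileNotFoundError(
                f"no control socket files found for {daemon}")
        return matches[0]

    # On a compute node this raises, matching the ERROR lines above:
    # find_ctl("ovn-northd")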
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.548 189385 DEBUG nova.compute.manager [req-fb337a23-d848-447a-88df-2a2ea52bc34a req-d0ff8269-ab26-4669-a5bc-62ef96be2a0a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.548 189385 DEBUG oslo_concurrency.lockutils [req-fb337a23-d848-447a-88df-2a2ea52bc34a req-d0ff8269-ab26-4669-a5bc-62ef96be2a0a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.548 189385 DEBUG oslo_concurrency.lockutils [req-fb337a23-d848-447a-88df-2a2ea52bc34a req-d0ff8269-ab26-4669-a5bc-62ef96be2a0a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.549 189385 DEBUG oslo_concurrency.lockutils [req-fb337a23-d848-447a-88df-2a2ea52bc34a req-d0ff8269-ab26-4669-a5bc-62ef96be2a0a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.549 189385 DEBUG nova.compute.manager [req-fb337a23-d848-447a-88df-2a2ea52bc34a req-d0ff8269-ab26-4669-a5bc-62ef96be2a0a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Processing event network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.550 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Instance event wait completed in 11 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.567 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.569 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068761.5689242, b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.570 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] VM Resumed (Lifecycle Event)
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.576 189385 INFO nova.virt.libvirt.driver [-] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Instance spawned successfully.
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.577 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.589 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.596 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.599 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.599 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.600 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.600 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.601 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.601 189385 DEBUG nova.virt.libvirt.driver [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:06:01 compute-0 nova_compute[189381]: 2025-11-25 11:06:01.633 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:06:02 compute-0 nova_compute[189381]: 2025-11-25 11:06:02.314 189385 INFO nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Took 32.50 seconds to spawn the instance on the hypervisor.
Nov 25 11:06:02 compute-0 nova_compute[189381]: 2025-11-25 11:06:02.315 189385 DEBUG nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:06:02 compute-0 nova_compute[189381]: 2025-11-25 11:06:02.407 189385 INFO nova.compute.manager [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Took 33.36 seconds to build instance.
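Both builds are now timed: 22.14 s spawn / 25.69 s build for instance 74072f60 and 32.50 s / 33.36 s for b2d67fe2 (the build figure covers the spawn plus the setup around it, hence the larger number). Pairs like these can be harvested from a journal dump with one regex matching the lines above:

    import re

    PAT = re.compile(
        r"\[instance: (?P<uuid>[0-9a-f-]{36})\] Took (?P<secs>[\d.]+) seconds "
        r"to (?P<what>spawn the instance on the hypervisor|build instance)"
    )

    def timings(lines):
        # Yield (instance uuid, "spawn"|"build", seconds) per match.
        for line in lines:
            m = PAT.search(line)
            if m:
                yield m["uuid"], m["what"].split()[0], float(m["secs"])

    demo = [
        "... [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] "
        "Took 32.50 seconds to spawn the instance on the hypervisor.",
    ]
    print(list(timings(demo)))  # [('b2d67fe2-...', 'spawn', 32.5)]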
Nov 25 11:06:02 compute-0 nova_compute[189381]: 2025-11-25 11:06:02.519 189385 DEBUG oslo_concurrency.lockutils [None req-086ec2af-8b61-4d55-8e3b-6101b78d65a7 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 34.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:06:02 compute-0 nova_compute[189381]: 2025-11-25 11:06:02.520 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 19.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:06:02 compute-0 nova_compute[189381]: 2025-11-25 11:06:02.520 189385 INFO nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:06:02 compute-0 nova_compute[189381]: 2025-11-25 11:06:02.521 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:06:03 compute-0 nova_compute[189381]: 2025-11-25 11:06:03.547 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:03 compute-0 nova_compute[189381]: 2025-11-25 11:06:03.828 189385 DEBUG nova.compute.manager [req-a94eb37e-b7aa-41b0-b056-fe02ed5983fc req-451fb32b-eece-44b4-b883-e2b3051f3fb7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:06:03 compute-0 nova_compute[189381]: 2025-11-25 11:06:03.829 189385 DEBUG oslo_concurrency.lockutils [req-a94eb37e-b7aa-41b0-b056-fe02ed5983fc req-451fb32b-eece-44b4-b883-e2b3051f3fb7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:06:03 compute-0 nova_compute[189381]: 2025-11-25 11:06:03.829 189385 DEBUG oslo_concurrency.lockutils [req-a94eb37e-b7aa-41b0-b056-fe02ed5983fc req-451fb32b-eece-44b4-b883-e2b3051f3fb7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:06:03 compute-0 nova_compute[189381]: 2025-11-25 11:06:03.829 189385 DEBUG oslo_concurrency.lockutils [req-a94eb37e-b7aa-41b0-b056-fe02ed5983fc req-451fb32b-eece-44b4-b883-e2b3051f3fb7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:06:03 compute-0 nova_compute[189381]: 2025-11-25 11:06:03.830 189385 DEBUG nova.compute.manager [req-a94eb37e-b7aa-41b0-b056-fe02ed5983fc req-451fb32b-eece-44b4-b883-e2b3051f3fb7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] No waiting events found dispatching network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:06:03 compute-0 nova_compute[189381]: 2025-11-25 11:06:03.830 189385 WARNING nova.compute.manager [req-a94eb37e-b7aa-41b0-b056-fe02ed5983fc req-451fb32b-eece-44b4-b883-e2b3051f3fb7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received unexpected event network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 for instance with vm_state active and task_state None.
Nov 25 11:06:03 compute-0 podman[255986]: 2025-11-25 11:06:03.952515091 +0000 UTC m=+0.065687485 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 11:06:04 compute-0 nova_compute[189381]: 2025-11-25 11:06:04.932 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:07 compute-0 ovn_controller[97779]: 2025-11-25T11:06:07Z|00162|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:06:07 compute-0 ovn_controller[97779]: 2025-11-25T11:06:07Z|00163|binding|INFO|Releasing lport b24d50bb-05f2-41c3-b57f-00165f8fc524 from this chassis (sb_readonly=0)
Nov 25 11:06:07 compute-0 ovn_controller[97779]: 2025-11-25T11:06:07Z|00164|binding|INFO|Releasing lport 702441f8-9440-4a38-a0f0-225d972b0155 from this chassis (sb_readonly=0)
Nov 25 11:06:07 compute-0 nova_compute[189381]: 2025-11-25 11:06:07.708 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:08 compute-0 nova_compute[189381]: 2025-11-25 11:06:08.549 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:09 compute-0 nova_compute[189381]: 2025-11-25 11:06:09.936 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:11 compute-0 nova_compute[189381]: 2025-11-25 11:06:11.482 189385 DEBUG nova.compute.manager [req-cbfdd70e-2dd9-4a53-9034-406c5538159b req-8f37b5f2-64cc-4177-ae63-4f6edf6544b6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-changed-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:06:11 compute-0 nova_compute[189381]: 2025-11-25 11:06:11.483 189385 DEBUG nova.compute.manager [req-cbfdd70e-2dd9-4a53-9034-406c5538159b req-8f37b5f2-64cc-4177-ae63-4f6edf6544b6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Refreshing instance network info cache due to event network-changed-e66646b4-49f7-478f-a2c1-e76f91c0dcb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:06:11 compute-0 nova_compute[189381]: 2025-11-25 11:06:11.483 189385 DEBUG oslo_concurrency.lockutils [req-cbfdd70e-2dd9-4a53-9034-406c5538159b req-8f37b5f2-64cc-4177-ae63-4f6edf6544b6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:06:11 compute-0 nova_compute[189381]: 2025-11-25 11:06:11.483 189385 DEBUG oslo_concurrency.lockutils [req-cbfdd70e-2dd9-4a53-9034-406c5538159b req-8f37b5f2-64cc-4177-ae63-4f6edf6544b6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:06:11 compute-0 nova_compute[189381]: 2025-11-25 11:06:11.483 189385 DEBUG nova.network.neutron [req-cbfdd70e-2dd9-4a53-9034-406c5538159b req-8f37b5f2-64cc-4177-ae63-4f6edf6544b6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Refreshing network info cache for port e66646b4-49f7-478f-a2c1-e76f91c0dcb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:06:12 compute-0 podman[256011]: 2025-11-25 11:06:12.954067104 +0000 UTC m=+0.064386816 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:06:12 compute-0 podman[256010]: 2025-11-25 11:06:12.960839441 +0000 UTC m=+0.075611082 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 25 11:06:13 compute-0 nova_compute[189381]: 2025-11-25 11:06:13.550 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:14 compute-0 nova_compute[189381]: 2025-11-25 11:06:14.850 189385 DEBUG nova.network.neutron [req-cbfdd70e-2dd9-4a53-9034-406c5538159b req-8f37b5f2-64cc-4177-ae63-4f6edf6544b6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updated VIF entry in instance network info cache for port e66646b4-49f7-478f-a2c1-e76f91c0dcb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:06:14 compute-0 nova_compute[189381]: 2025-11-25 11:06:14.853 189385 DEBUG nova.network.neutron [req-cbfdd70e-2dd9-4a53-9034-406c5538159b req-8f37b5f2-64cc-4177-ae63-4f6edf6544b6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updating instance_info_cache with network_info: [{"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:06:14 compute-0 nova_compute[189381]: 2025-11-25 11:06:14.939 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:14 compute-0 podman[256048]: 2025-11-25 11:06:14.952359258 +0000 UTC m=+0.070125694 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 25 11:06:15 compute-0 nova_compute[189381]: 2025-11-25 11:06:15.121 189385 DEBUG oslo_concurrency.lockutils [req-cbfdd70e-2dd9-4a53-9034-406c5538159b req-8f37b5f2-64cc-4177-ae63-4f6edf6544b6 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:06:18 compute-0 nova_compute[189381]: 2025-11-25 11:06:18.552 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:19 compute-0 nova_compute[189381]: 2025-11-25 11:06:19.941 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:22 compute-0 podman[256069]: 2025-11-25 11:06:22.968283622 +0000 UTC m=+0.079274758 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 11:06:23 compute-0 nova_compute[189381]: 2025-11-25 11:06:23.486 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:23 compute-0 nova_compute[189381]: 2025-11-25 11:06:23.555 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:24 compute-0 nova_compute[189381]: 2025-11-25 11:06:24.944 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:25 compute-0 podman[256087]: 2025-11-25 11:06:25.954728239 +0000 UTC m=+0.064666734 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Nov 25 11:06:25 compute-0 podman[256088]: 2025-11-25 11:06:25.99581592 +0000 UTC m=+0.100041580 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:06:28 compute-0 nova_compute[189381]: 2025-11-25 11:06:28.558 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:29 compute-0 ovn_controller[97779]: 2025-11-25T11:06:29Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:ef:83 10.100.0.7
Nov 25 11:06:29 compute-0 ovn_controller[97779]: 2025-11-25T11:06:29Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:ef:83 10.100.0.7
Nov 25 11:06:29 compute-0 nova_compute[189381]: 2025-11-25 11:06:29.184 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:29 compute-0 podman[203557]: time="2025-11-25T11:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:06:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Nov 25 11:06:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5730 "" "Go-http-client/1.1"
Nov 25 11:06:29 compute-0 nova_compute[189381]: 2025-11-25 11:06:29.946 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:30 compute-0 nova_compute[189381]: 2025-11-25 11:06:30.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:30 compute-0 podman[256146]: 2025-11-25 11:06:30.975648898 +0000 UTC m=+0.089124403 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:06:31 compute-0 podman[256145]: 2025-11-25 11:06:31.011345872 +0000 UTC m=+0.129219115 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:06:31 compute-0 nova_compute[189381]: 2025-11-25 11:06:31.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:31 compute-0 openstack_network_exporter[205722]: ERROR   11:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:06:31 compute-0 openstack_network_exporter[205722]: ERROR   11:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:06:31 compute-0 openstack_network_exporter[205722]: ERROR   11:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:06:31 compute-0 openstack_network_exporter[205722]: ERROR   11:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:06:31 compute-0 openstack_network_exporter[205722]: ERROR   11:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:06:33 compute-0 nova_compute[189381]: 2025-11-25 11:06:33.561 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:34 compute-0 nova_compute[189381]: 2025-11-25 11:06:34.949 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:34 compute-0 podman[256199]: 2025-11-25 11:06:34.969687292 +0000 UTC m=+0.086108896 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.062 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.063 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.063 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.064 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.157 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.226 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.233 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.297 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.305 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.366 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.367 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.427 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.434 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.502 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.505 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.568 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.975 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.977 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4839MB free_disk=72.07122421264648GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.977 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:06:35 compute-0 nova_compute[189381]: 2025-11-25 11:06:35.978 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:06:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:06:36.072 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:06:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:06:36.073 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:06:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:06:36.074 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.176 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.177 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.178 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 74072f60-1884-462d-9a69-28925a67978d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.179 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.180 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.259 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.274 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.300 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:06:36 compute-0 nova_compute[189381]: 2025-11-25 11:06:36.300 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:06:38 compute-0 ovn_controller[97779]: 2025-11-25T11:06:38Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:ce:5c 10.100.0.5
Nov 25 11:06:38 compute-0 ovn_controller[97779]: 2025-11-25T11:06:38Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:ce:5c 10.100.0.5
Nov 25 11:06:38 compute-0 nova_compute[189381]: 2025-11-25 11:06:38.564 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.301 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.302 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.302 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.302 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.848 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.848 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.848 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.849 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:06:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:06:39.883 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:06:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:06:39.885 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.888 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:39 compute-0 nova_compute[189381]: 2025-11-25 11:06:39.953 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:40 compute-0 ovn_controller[97779]: 2025-11-25T11:06:40Z|00165|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:06:40 compute-0 ovn_controller[97779]: 2025-11-25T11:06:40Z|00166|binding|INFO|Releasing lport b24d50bb-05f2-41c3-b57f-00165f8fc524 from this chassis (sb_readonly=0)
Nov 25 11:06:40 compute-0 ovn_controller[97779]: 2025-11-25T11:06:40Z|00167|binding|INFO|Releasing lport 702441f8-9440-4a38-a0f0-225d972b0155 from this chassis (sb_readonly=0)
Nov 25 11:06:40 compute-0 nova_compute[189381]: 2025-11-25 11:06:40.460 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:43 compute-0 nova_compute[189381]: 2025-11-25 11:06:43.566 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:43 compute-0 nova_compute[189381]: 2025-11-25 11:06:43.819 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:06:43 compute-0 nova_compute[189381]: 2025-11-25 11:06:43.833 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:06:43 compute-0 nova_compute[189381]: 2025-11-25 11:06:43.834 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:06:43 compute-0 nova_compute[189381]: 2025-11-25 11:06:43.835 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:43 compute-0 nova_compute[189381]: 2025-11-25 11:06:43.835 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:43 compute-0 nova_compute[189381]: 2025-11-25 11:06:43.836 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:43 compute-0 nova_compute[189381]: 2025-11-25 11:06:43.836 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:06:43 compute-0 podman[256250]: 2025-11-25 11:06:43.955327435 +0000 UTC m=+0.068200167 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 25 11:06:43 compute-0 podman[256251]: 2025-11-25 11:06:43.965379076 +0000 UTC m=+0.071523683 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:06:44 compute-0 nova_compute[189381]: 2025-11-25 11:06:44.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:06:44 compute-0 nova_compute[189381]: 2025-11-25 11:06:44.956 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:45 compute-0 nova_compute[189381]: 2025-11-25 11:06:45.266 189385 INFO nova.compute.manager [None req-fd2e2342-7d58-45d3-94c9-d9179d885b80 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Get console output
Nov 25 11:06:45 compute-0 nova_compute[189381]: 2025-11-25 11:06:45.368 239472 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 11:06:45 compute-0 podman[256291]: 2025-11-25 11:06:45.959397546 +0000 UTC m=+0.073116030 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, container_name=kepler, vcs-type=git)
Nov 25 11:06:48 compute-0 nova_compute[189381]: 2025-11-25 11:06:48.568 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:48 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:06:48.887 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:06:48 compute-0 nova_compute[189381]: 2025-11-25 11:06:48.956 189385 DEBUG nova.compute.manager [req-952806ca-dc61-483d-b272-bb4f6e506ddb req-5d677612-ad19-4460-bd2f-2f9440084acb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-changed-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:06:48 compute-0 nova_compute[189381]: 2025-11-25 11:06:48.957 189385 DEBUG nova.compute.manager [req-952806ca-dc61-483d-b272-bb4f6e506ddb req-5d677612-ad19-4460-bd2f-2f9440084acb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Refreshing instance network info cache due to event network-changed-e66646b4-49f7-478f-a2c1-e76f91c0dcb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:06:48 compute-0 nova_compute[189381]: 2025-11-25 11:06:48.958 189385 DEBUG oslo_concurrency.lockutils [req-952806ca-dc61-483d-b272-bb4f6e506ddb req-5d677612-ad19-4460-bd2f-2f9440084acb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:06:48 compute-0 nova_compute[189381]: 2025-11-25 11:06:48.958 189385 DEBUG oslo_concurrency.lockutils [req-952806ca-dc61-483d-b272-bb4f6e506ddb req-5d677612-ad19-4460-bd2f-2f9440084acb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:06:48 compute-0 nova_compute[189381]: 2025-11-25 11:06:48.959 189385 DEBUG nova.network.neutron [req-952806ca-dc61-483d-b272-bb4f6e506ddb req-5d677612-ad19-4460-bd2f-2f9440084acb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Refreshing network info cache for port e66646b4-49f7-478f-a2c1-e76f91c0dcb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:06:49 compute-0 nova_compute[189381]: 2025-11-25 11:06:49.960 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:50 compute-0 nova_compute[189381]: 2025-11-25 11:06:50.919 189385 DEBUG nova.network.neutron [req-952806ca-dc61-483d-b272-bb4f6e506ddb req-5d677612-ad19-4460-bd2f-2f9440084acb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updated VIF entry in instance network info cache for port e66646b4-49f7-478f-a2c1-e76f91c0dcb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:06:50 compute-0 nova_compute[189381]: 2025-11-25 11:06:50.920 189385 DEBUG nova.network.neutron [req-952806ca-dc61-483d-b272-bb4f6e506ddb req-5d677612-ad19-4460-bd2f-2f9440084acb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updating instance_info_cache with network_info: [{"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:06:51 compute-0 nova_compute[189381]: 2025-11-25 11:06:51.164 189385 DEBUG oslo_concurrency.lockutils [req-952806ca-dc61-483d-b272-bb4f6e506ddb req-5d677612-ad19-4460-bd2f-2f9440084acb d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:06:53 compute-0 nova_compute[189381]: 2025-11-25 11:06:53.570 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:53 compute-0 podman[256312]: 2025-11-25 11:06:53.954071894 +0000 UTC m=+0.064737987 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:06:54 compute-0 nova_compute[189381]: 2025-11-25 11:06:54.964 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:56 compute-0 podman[256331]: 2025-11-25 11:06:56.958268345 +0000 UTC m=+0.070098932 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Nov 25 11:06:56 compute-0 podman[256332]: 2025-11-25 11:06:56.979066118 +0000 UTC m=+0.089973878 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:06:58 compute-0 nova_compute[189381]: 2025-11-25 11:06:58.574 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:06:59 compute-0 nova_compute[189381]: 2025-11-25 11:06:59.667 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "078c0d57-6a60-4ffc-b196-332f00f1051b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:06:59 compute-0 nova_compute[189381]: 2025-11-25 11:06:59.668 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:06:59 compute-0 podman[203557]: time="2025-11-25T11:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:06:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31989 "" "Go-http-client/1.1"
Nov 25 11:06:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5736 "" "Go-http-client/1.1"
Nov 25 11:06:59 compute-0 nova_compute[189381]: 2025-11-25 11:06:59.756 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:06:59 compute-0 nova_compute[189381]: 2025-11-25 11:06:59.909 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:06:59 compute-0 nova_compute[189381]: 2025-11-25 11:06:59.910 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:06:59 compute-0 nova_compute[189381]: 2025-11-25 11:06:59.920 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:06:59 compute-0 nova_compute[189381]: 2025-11-25 11:06:59.922 189385 INFO nova.compute.claims [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:06:59 compute-0 nova_compute[189381]: 2025-11-25 11:06:59.968 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.113 189385 DEBUG nova.compute.provider_tree [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.125 189385 DEBUG nova.scheduler.client.report [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.238 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.240 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.302 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.303 189385 DEBUG nova.network.neutron [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.330 189385 INFO nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.377 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.494 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.496 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.497 189385 INFO nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Creating image(s)
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.498 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "/var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.498 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "/var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.499 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "/var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.515 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.575 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.577 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "5e1076775cb022823267aba8feacfddb7ab1429b" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.578 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.592 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.650 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.652 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.723 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b,backing_fmt=raw /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk 1073741824" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.725 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "5e1076775cb022823267aba8feacfddb7ab1429b" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.726 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.786 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5e1076775cb022823267aba8feacfddb7ab1429b --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.789 189385 DEBUG nova.virt.disk.api [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Checking if we can resize image /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.789 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.850 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.852 189385 DEBUG nova.virt.disk.api [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Cannot resize image /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.853 189385 DEBUG nova.objects.instance [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lazy-loading 'migration_context' on Instance uuid 078c0d57-6a60-4ffc-b196-332f00f1051b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.869 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.870 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Ensure instance console log exists: /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.870 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.871 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:00 compute-0 nova_compute[189381]: 2025-11-25 11:07:00.872 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:01 compute-0 nova_compute[189381]: 2025-11-25 11:07:01.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:01 compute-0 nova_compute[189381]: 2025-11-25 11:07:01.161 189385 DEBUG nova.policy [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '97d307f20103434babe2431661f5bbdb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '89069d3ee96a4fd493232b094a94877d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 25 11:07:01 compute-0 openstack_network_exporter[205722]: ERROR   11:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:07:01 compute-0 openstack_network_exporter[205722]: ERROR   11:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:07:01 compute-0 openstack_network_exporter[205722]: ERROR   11:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:07:01 compute-0 openstack_network_exporter[205722]: ERROR   11:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:07:01 compute-0 openstack_network_exporter[205722]: ERROR   11:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:07:01 compute-0 anacron[30897]: Job `cron.monthly' started
Nov 25 11:07:01 compute-0 anacron[30897]: Job `cron.monthly' terminated
Nov 25 11:07:01 compute-0 anacron[30897]: Normal exit (3 jobs run)
Nov 25 11:07:01 compute-0 podman[256390]: 2025-11-25 11:07:01.811809515 +0000 UTC m=+0.070912726 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:07:01 compute-0 podman[256389]: 2025-11-25 11:07:01.847414967 +0000 UTC m=+0.109785992 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:03.217 106741 DEBUG eventlet.wsgi.server [-] (106741) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:03.219 106741 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: Accept: */*
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: Connection: close
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: Content-Type: text/plain
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: Host: 169.254.169.254
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: User-Agent: curl/7.84.0
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: X-Forwarded-For: 10.100.0.7
Nov 25 11:07:03 compute-0 ovn_metadata_agent[106629]: X-Ovn-Network-Id: 5a488783-81eb-4a79-a4fc-78987bdf65c9 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.342 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.343 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.352 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'name': 'te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.354 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 11:07:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:03.355 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 25 11:07:03 compute-0 nova_compute[189381]: 2025-11-25 11:07:03.578 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:04 compute-0 nova_compute[189381]: 2025-11-25 11:07:04.436 189385 DEBUG nova.network.neutron [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Successfully created port: 12f4cfde-a94c-4c66-a066-f073dabfcb90 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:07:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:04.631 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1852 Content-Type: application/json Date: Tue, 25 Nov 2025 11:07:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7942976d-c9cf-4cab-91d9-e0234f0f5fc3 x-openstack-request-id: req-7942976d-c9cf-4cab-91d9-e0234f0f5fc3 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 11:07:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:04.631 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f", "name": "tempest-TestNetworkBasicOps-server-401290240", "status": "ACTIVE", "tenant_id": "89069d3ee96a4fd493232b094a94877d", "user_id": "97d307f20103434babe2431661f5bbdb", "metadata": {}, "hostId": "167fe0d1ea770186a4639a150fe952cdf244810514a59cb90fb37675", "image": {"id": "b388f0fb-bd04-4296-928b-44c706e0493e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b388f0fb-bd04-4296-928b-44c706e0493e"}]}, "flavor": {"id": "b7c0626e-febc-4083-b621-6f5ee0740a18", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b7c0626e-febc-4083-b621-6f5ee0740a18"}]}, "created": "2025-11-25T11:05:24Z", "updated": "2025-11-25T11:06:02Z", "addresses": {"tempest-network-smoke--1505779129": [{"version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:05:ce:5c"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-1314646098", "OS-SRV-USG:launched_at": "2025-11-25T11:06:02.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-123225581"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 11:07:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:04.632 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f used request id req-7942976d-c9cf-4cab-91d9-e0234f0f5fc3 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 25 11:07:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:04.633 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f', 'name': 'tempest-TestNetworkBasicOps-server-401290240', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '89069d3ee96a4fd493232b094a94877d', 'user_id': '97d307f20103434babe2431661f5bbdb', 'hostId': '167fe0d1ea770186a4639a150fe952cdf244810514a59cb90fb37675', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:07:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:04.635 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 74072f60-1884-462d-9a69-28925a67978d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 11:07:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:04.636 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/74072f60-1884-462d-9a69-28925a67978d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 25 11:07:04 compute-0 nova_compute[189381]: 2025-11-25 11:07:04.971 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:05.088 106741 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:05.089 106741 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.8701839
Nov 25 11:07:05 compute-0 haproxy-metadata-proxy-5a488783-81eb-4a79-a4fc-78987bdf65c9[255930]: 10.100.0.7:37522 [25/Nov/2025:11:07:03.215] listener listener/metadata 0/0/0/1873/1873 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:05.168 106741 DEBUG eventlet.wsgi.server [-] (106741) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:05.170 106741 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: Accept: */*
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: Connection: close
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: Content-Length: 100
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: Content-Type: application/x-www-form-urlencoded
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: Host: 169.254.169.254
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: User-Agent: curl/7.84.0
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: X-Forwarded-For: 10.100.0.7
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: X-Ovn-Network-Id: 5a488783-81eb-4a79-a4fc-78987bdf65c9
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: 
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Nov 25 11:07:05 compute-0 haproxy-metadata-proxy-5a488783-81eb-4a79-a4fc-78987bdf65c9[255930]: 10.100.0.7:37528 [25/Nov/2025:11:07:05.167] listener listener/metadata 0/0/0/760/760 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:05.927 106741 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Nov 25 11:07:05 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:05.927 106741 INFO eventlet.wsgi.server [-] 10.100.0.7,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.7577651
Nov 25 11:07:05 compute-0 podman[256432]: 2025-11-25 11:07:05.950421757 +0000 UTC m=+0.067302581 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.001 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2082 Content-Type: application/json Date: Tue, 25 Nov 2025 11:07:04 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-5da5a7f1-f10e-4f74-813f-15e4804d2a17 x-openstack-request-id: req-5da5a7f1-f10e-4f74-813f-15e4804d2a17 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.001 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "74072f60-1884-462d-9a69-28925a67978d", "name": "tempest-TestServerBasicOps-server-671773331", "status": "ACTIVE", "tenant_id": "6daca89a9f274580a80130a94ea91f45", "user_id": "09f4a560d6494ec3aa4e1a291f7917c1", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "c335f9eda92b13266fc299ddd3aebca02b215ac62c72ab48664e55b7", "image": {"id": "b388f0fb-bd04-4296-928b-44c706e0493e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/b388f0fb-bd04-4296-928b-44c706e0493e"}]}, "flavor": {"id": "b7c0626e-febc-4083-b621-6f5ee0740a18", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b7c0626e-febc-4083-b621-6f5ee0740a18"}]}, "created": "2025-11-25T11:05:29Z", "updated": "2025-11-25T11:05:56Z", "addresses": {"tempest-TestServerBasicOps-566008335-network": [{"version": 4, "addr": "10.100.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5a:ef:83"}, {"version": 4, "addr": "192.168.122.180", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:5a:ef:83"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/74072f60-1884-462d-9a69-28925a67978d"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/74072f60-1884-462d-9a69-28925a67978d"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-1049920664", "OS-SRV-USG:launched_at": "2025-11-25T11:05:56.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1283938475"}, {"name": "tempest-secgroup-smoke-583582113"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000d", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.001 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/74072f60-1884-462d-9a69-28925a67978d used request id req-5da5a7f1-f10e-4f74-813f-15e4804d2a17 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.002 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '74072f60-1884-462d-9a69-28925a67978d', 'name': 'tempest-TestServerBasicOps-server-671773331', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000d', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6daca89a9f274580a80130a94ea91f45', 'user_id': '09f4a560d6494ec3aa4e1a291f7917c1', 'hostId': 'c335f9eda92b13266fc299ddd3aebca02b215ac62c72ab48664e55b7', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.003 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.003 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.004 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:07:06.003536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.008 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.011 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f / tape66646b4-49 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.011 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.outgoing.bytes volume: 15886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.015 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 74072f60-1884-462d-9a69-28925a67978d / tap086b3bc6-2c inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.015 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.outgoing.bytes volume: 35210 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
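Each of the per-instance vNIC samples above is read from libvirt. A minimal sketch with the python libvirt bindings, using the domain and tap device names visible in this log; the same eight-field tuple also feeds the later network.incoming.bytes, *.packets, *.packets.drop and *.packets.error pollsters.

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000d')
    # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                         tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = dom.interfaceStats('tap086b3bc6-2c')
    print('network.outgoing.bytes =', stats[4])    # cf. volume: 35210
    print('network.outgoing.packets =', stats[5])  # cf. volume: 142
    conn.close()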
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.015 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.015 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.015 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.016 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.016 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.016 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.016 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.016 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
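The delta meter subtracts the previous cumulative reading; the "No delta meter predecessor" lines above mark vNICs seen for the first time, for which the pollster reports 0. A hedged sketch of that bookkeeping (the cache layout is illustrative):

    _previous = {}

    def delta_sample(instance_id, nic, cumulative):
        """Return the increase since the last poll, or 0 on first sight."""
        key = (instance_id, nic)
        prev = _previous.get(key)
        _previous[key] = cumulative
        if prev is None:
            return 0                      # no predecessor yet
        return max(0, cumulative - prev)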
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.017 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.017 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:07:06.016116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.018 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:07:06.017832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.049 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/memory.usage volume: 42.73828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.071 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/memory.usage volume: 42.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.093 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/memory.usage volume: 43.1796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
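memory.usage is reported in MiB (the flavor has 128 MB of RAM, and the guests sit around 42-43 MiB). A sketch of the usual derivation from libvirt's per-domain memory counters, assuming the common "available minus unused" arithmetic with a fallback to RSS; treat the exact formula as an assumption:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000d')
    mem = dom.memoryStats()                      # counters in KiB
    if 'available' in mem and 'unused' in mem:
        usage_mib = (mem['available'] - mem['unused']) / 1024.0
    else:
        usage_mib = mem.get('rss', 0) / 1024.0   # resident set fallback
    print('memory.usage =', usage_mib)           # cf. volume: 43.1796875
    conn.close()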
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.095 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.096 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.096 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-25T11:07:06.096204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.096 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-401290240>, <NovaLikeServer: tempest-TestServerBasicOps-server-671773331>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-401290240>, <NovaLikeServer: tempest-TestServerBasicOps-server-671773331>]
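This ERROR is ceilometer's blacklisting path, not a crash: the libvirt inspector computes no pre-aggregated rates (see the "does not provide data" line just above), so the rate pollster raises PollsterPermanentError and the manager stops offering it those resources on this source. A sketch of the raising side, using the exception named in the message above:

    from ceilometer.polling import plugin_base

    def get_samples(manager, cache, resources):
        # Declaring these resources permanently unpollable is what makes
        # the manager emit the "Prevent pollster ... anymore!" message.
        raise plugin_base.PollsterPermanentError(resources)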
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.097 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.097 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.097 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.097 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.098 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.098 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:07:06.098131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.098 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.098 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.incoming.bytes volume: 20170 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.099 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.incoming.bytes volume: 15763 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.099 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.100 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.100 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.101 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:07:06.101180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.101 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.101 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.outgoing.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.102 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.outgoing.packets volume: 142 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.103 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.104 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.104 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:07:06.104204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.104 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.104 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.105 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.106 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.106 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.107 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.107 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/cpu volume: 128070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:07:06.107203) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.108 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/cpu volume: 36250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.108 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/cpu volume: 31160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
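The cpu meter is cumulative guest CPU time in nanoseconds (128070000000 ns is about 128 s). It is the fifth field of libvirt's domain info; a sketch:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000d')
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print('cpu =', cpu_time_ns)    # cf. volume: 31160000000 (~31 s)
    conn.close()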
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.109 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.110 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:07:06.110363) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.110 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.110 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.111 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.112 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:07:06.113312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.132 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.132 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.147 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.148 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.163 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.164 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
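disk.device.capacity is sampled twice per instance because each guest has two block devices: the 1 GiB root disk (1073741824 bytes, matching the m1.nano flavor's 1 GB disk) and a small config drive (the ~498 KiB values; config_drive is "True" in the server body earlier). Capacity, and the later disk.device.usage (allocation), both come from libvirt's blockInfo; the virtio device names below are an assumption, not taken from this log.

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000d')
    for dev in ('vda', 'vdb'):                    # assumed device names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'capacity =', capacity, 'allocation =', allocation)
    conn.close()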
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.165 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.166 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:07:06.166204) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.210 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.211 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.271 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.read.bytes volume: 31091200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.272 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.310 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.read.bytes volume: 29714944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.311 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
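Read and write byte/request counters come from libvirt's blockStats call, which also backs the later disk.device.read.requests, write.bytes and write.requests samples. Device names assumed as before:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000d')
    # blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs)
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats('vda')
    print('disk.device.read.bytes =', rd_bytes)    # cf. volume: 29714944
    print('disk.device.read.requests =', rd_req)   # cf. volume: 1067
    conn.close()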
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.313 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.314 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.314 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 1600810847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:07:06.314075) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.314 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 68341060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.315 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.read.latency volume: 954019581 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.315 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.read.latency volume: 68552594 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.315 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.read.latency volume: 845441016 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.316 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.read.latency volume: 144909778 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
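The latency meters are cumulative nanoseconds spent in I/O, available through libvirt's extended block statistics; the key names below are the ones libvirt returns for these counters, with the device name again assumed:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000d')
    stats = dom.blockStatsFlags('vda')
    print('disk.device.read.latency =', stats['rd_total_times'])
    print('disk.device.write.latency =', stats['wr_total_times'])
    conn.close()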
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.317 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.317 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.317 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:07:06.317378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.318 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.318 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.read.requests volume: 1141 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.318 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.318 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.read.requests volume: 1067 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.319 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.319 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.320 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.320 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:07:06.320208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.320 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.321 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.321 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.321 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.321 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.323 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.323 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:07:06.323040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.323 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.323 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.write.bytes volume: 72953856 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.324 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.324 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.write.bytes volume: 73011200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.324 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 10464762727 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:07:06.325487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.325 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.326 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.write.latency volume: 3232838148 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.326 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.326 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.write.latency volume: 3546774033 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.326 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.327 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.328 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:07:06.327451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
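
All three instances above report power.state volume 1, which in libvirt's domain-state numbering is VIR_DOMAIN_RUNNING. A minimal sketch of reading the same state directly with libvirt-python (the connection URI and the state map below are assumptions for illustration, not taken from this log):

    import libvirt  # assumes libvirt-python is installed on the compute host

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_NOSTATE: 'nostate',  # 0
        libvirt.VIR_DOMAIN_RUNNING: 'running',  # 1 -- the volume seen above
        libvirt.VIR_DOMAIN_PAUSED:  'paused',   # 3
        libvirt.VIR_DOMAIN_SHUTOFF: 'shutoff',  # 5
    }

    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains():
        state, _reason = dom.state()  # state() returns (state, reason)
        print(dom.UUIDString(), state, STATE_NAMES.get(state, 'other'))
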
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.329 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 313 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.329 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.329 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.write.requests volume: 298 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.329 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.329 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.write.requests volume: 331 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.330 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
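
Each disk.device.* pollster above emits two samples per instance: these guests carry two block devices (a virtio root disk plus a sata config drive, per the disk_info mapping logged at 11:07:07 further down), and the idle second device plausibly accounts for the volume: 0 entries. A sketch of reading the same per-device counters straight from libvirt (the instance UUID is copied from this log; device names and URI are assumptions):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('18a30ced-09e6-4c6a-9ea3-4c59f437a71a')
    for dev in ('vda', 'sda'):  # root disk and config drive
        # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs)
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(f'{dev} write.requests={wr_req} write.bytes={wr_bytes}')
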
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.330 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.330 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.331 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes.delta volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.331 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.331 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
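
The *.delta meters report the increase since the previous poll rather than the raw cumulative interface counter, which is why a busy instance shows 1262 while idle ones show 0. A minimal, hypothetical sketch of that derivation (the cache and function below are illustrative, not ceilometer's internals):

    _previous = {}  # (resource_id, meter) -> last cumulative reading

    def to_delta(key, cumulative):
        """Increase since the last poll; 0 on first sight or counter reset."""
        last = _previous.get(key)
        _previous[key] = cumulative
        if last is None or cumulative < last:
            return 0
        return cumulative - last

    # e.g. to_delta(('vm1', 'network.incoming.bytes'), 4200)
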
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:07:06.328899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:07:06.330969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.332 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.333 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:07:06.332376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.333 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.333 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-25T11:07:06.334397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-401290240>, <NovaLikeServer: tempest-TestServerBasicOps-server-671773331>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-401290240>, <NovaLikeServer: tempest-TestServerBasicOps-server-671773331>]
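
The ERROR above is ceilometer's blacklisting path: the libvirt inspector exposes no instantaneous *.rate data (see the "does not provide data" DEBUG line just before it), so the pollster raises PollsterPermanentError with the affected resources and the polling manager stops offering them to this pollster on that source. A hedged sketch of how a pollster triggers this, assuming ceilometer's plugin_base is importable (the class name is made up):

    from ceilometer.polling import plugin_base

    class UnsupportedRatePollster(plugin_base.PollsterBase):  # hypothetical
        @property
        def default_discovery(self):
            return 'local_instances'

        def get_samples(self, manager, cache, resources):
            # No data source for rate meters on this hypervisor: mark every
            # discovered instance as permanently unpollable for this meter.
            raise plugin_base.PollsterPermanentError(resources)
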
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.335 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.336 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:07:06.335267) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:07:06.336181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.336 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.336 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.incoming.packets volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.337 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.337 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:07:06.337475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.338 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.338 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.339 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:07:06.338787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.339 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.340 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:07:06.340383) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.340 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.340 14 DEBUG ceilometer.compute.pollsters [-] b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.341 14 DEBUG ceilometer.compute.pollsters [-] 74072f60-1884-462d-9a69-28925a67978d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:07:06 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:07:06.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
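
The burst of "Finished processing pollster [...]" lines marks the end of one polling task: every meter in the task has gone through discovery, sampling, and publication. A simplified, hypothetical reconstruction of that per-pollster cycle follows (function and parameter names are invented; the real logic lives in ceilometer/polling/manager.py):

    def run_polling_task(pollsters, discover, publish):
        for pollster in pollsters:
            # "Executing discovery process for pollsters [...]"
            resources = discover(pollster.default_discovery)
            # "Polling pollster <name> in the context of pollsters"
            samples = list(pollster.get_samples(resources))
            publish(samples)
            # "Finished polling pollster <name>", then the manager logs
            # "Finished processing pollster [<name>]" once the task wraps up
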
Nov 25 11:07:06 compute-0 nova_compute[189381]: 2025-11-25 11:07:06.793 189385 DEBUG nova.network.neutron [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Successfully updated port: 12f4cfde-a94c-4c66-a066-f073dabfcb90 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:07:06 compute-0 nova_compute[189381]: 2025-11-25 11:07:06.839 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:07:06 compute-0 nova_compute[189381]: 2025-11-25 11:07:06.839 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquired lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:07:06 compute-0 nova_compute[189381]: 2025-11-25 11:07:06.840 189385 DEBUG nova.network.neutron [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:07:06 compute-0 nova_compute[189381]: 2025-11-25 11:07:06.882 189385 DEBUG nova.compute.manager [req-df7d4740-95b6-4ff8-adc8-03ae98eeeb78 req-f673b913-3db7-4050-9019-a0a93658b711 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received event network-changed-12f4cfde-a94c-4c66-a066-f073dabfcb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:06 compute-0 nova_compute[189381]: 2025-11-25 11:07:06.883 189385 DEBUG nova.compute.manager [req-df7d4740-95b6-4ff8-adc8-03ae98eeeb78 req-f673b913-3db7-4050-9019-a0a93658b711 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Refreshing instance network info cache due to event network-changed-12f4cfde-a94c-4c66-a066-f073dabfcb90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:07:06 compute-0 nova_compute[189381]: 2025-11-25 11:07:06.883 189385 DEBUG oslo_concurrency.lockutils [req-df7d4740-95b6-4ff8-adc8-03ae98eeeb78 req-f673b913-3db7-4050-9019-a0a93658b711 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:07:06 compute-0 nova_compute[189381]: 2025-11-25 11:07:06.968 189385 DEBUG nova.network.neutron [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.897 189385 DEBUG nova.network.neutron [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Updating instance_info_cache with network_info: [{"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.927 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Releasing lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
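
The Acquiring/Acquired/Releasing triplet around the cache refresh is oslo.concurrency's named-lock pattern: two request contexts (req-d7e9b128 and req-df7d4740) serialize on the same "refresh_cache-<uuid>" name, so the second acquires only after the first releases. A minimal usage sketch with the lockutils API (the body is a placeholder):

    from oslo_concurrency import lockutils

    instance_uuid = '078c0d57-6a60-4ffc-b196-332f00f1051b'
    with lockutils.lock('refresh_cache-' + instance_uuid):
        # refresh the instance's network info cache while holding the lock;
        # concurrent requests for the same instance wait here
        pass
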
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.928 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Instance network_info: |[{"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.929 189385 DEBUG oslo_concurrency.lockutils [req-df7d4740-95b6-4ff8-adc8-03ae98eeeb78 req-f673b913-3db7-4050-9019-a0a93658b711 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.929 189385 DEBUG nova.network.neutron [req-df7d4740-95b6-4ff8-adc8-03ae98eeeb78 req-f673b913-3db7-4050-9019-a0a93658b711 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Refreshing network info cache for port 12f4cfde-a94c-4c66-a066-f073dabfcb90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.932 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Start _get_guest_xml network_info=[{"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': 'b388f0fb-bd04-4296-928b-44c706e0493e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.939 189385 WARNING nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.952 189385 DEBUG nova.virt.libvirt.host [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.953 189385 DEBUG nova.virt.libvirt.host [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.961 189385 DEBUG nova.virt.libvirt.host [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.963 189385 DEBUG nova.virt.libvirt.host [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
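
Nova probes for a usable CPU controller first under cgroups v1 (missing on this host) and then under cgroups v2 (found). On a unified-hierarchy host one plausible probe, sketched below, is to read the controller list from a single sysfs file; this is an illustration, not nova's exact implementation:

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        """True if the unified cgroup hierarchy exposes the 'cpu' controller."""
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:  # not a cgroup-v2 (unified) host
            return False
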
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.963 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.964 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T10:59:15Z,direct_url=<?>,disk_format='qcow2',id=b388f0fb-bd04-4296-928b-44c706e0493e,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='aef0c6ba1dd54218a527ced3f8d2a1be',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T10:59:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.965 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.965 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.966 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.966 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.967 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.967 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.968 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.968 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.969 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.969 189385 DEBUG nova.virt.hardware [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
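
In the topology negotiation above, 0 means "no preference" and the 65536 limits are the unconstrained defaults, so the single-vCPU m1.nano flavor can only land on sockets=1, cores=1, threads=1. A hedged sketch of the underlying search, enumerating factorizations of the vCPU count within the limits (names are illustrative, not nova's):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples with s * c * t == vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -> "Got 1 possible topologies"
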
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.974 189385 DEBUG nova.virt.libvirt.vif [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:06:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2121031331',display_name='tempest-TestNetworkBasicOps-server-2121031331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2121031331',id=14,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJD8tgqgcb1AhoxRl1LsWSz8grSSR7HvFac3Gue31bq68PmaTfpqqvQ1Nzp3FpFe1yJgfkctH98TQri7uN5cvNDX8sS2K+xsbRfTATbNFzm68iWSYI7bJKjqVjeVDfnbYA==',key_name='tempest-TestNetworkBasicOps-1223615685',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89069d3ee96a4fd493232b094a94877d',ramdisk_id='',reservation_id='r-03dq3w90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-448137458',owner_user_name='tempest-TestNetworkBasicOps-448137458-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:07:00Z,user_data=None,user_id='97d307f20103434babe2431661f5bbdb',uuid=078c0d57-6a60-4ffc-b196-332f00f1051b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.975 189385 DEBUG nova.network.os_vif_util [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converting VIF {"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.976 189385 DEBUG nova.network.os_vif_util [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ba:b8,bridge_name='br-int',has_traffic_filtering=True,id=12f4cfde-a94c-4c66-a066-f073dabfcb90,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12f4cfde-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
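
The two lines above show nova translating its internal VIF dict into an os-vif object before handing it to the ovs plugin. A minimal sketch of building the same VIFOpenVSwitch by hand, with field values copied from the log repr; the exact constructor keyword names are an assumption inferred from that repr, and the Network object is abbreviated to a few fields:

    # Sketch: constructing the os-vif object that nova_to_osvif_vif logs above.
    # Values come from the log line; Network is abbreviated.
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    net = osv_network.Network(
        id='a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', bridge='br-int', mtu=1442)
    profile = osv_vif.VIFPortProfileOpenVSwitch(
        interface_id='12f4cfde-a94c-4c66-a066-f073dabfcb90')
    vif = osv_vif.VIFOpenVSwitch(
        id='12f4cfde-a94c-4c66-a066-f073dabfcb90',
        address='fa:16:3e:f4:ba:b8',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=net,
        plugin='ovs',
        port_profile=profile,
        preserve_on_delete=False,
        active=False,
        vif_name='tap12f4cfde-a9')
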
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.977 189385 DEBUG nova.objects.instance [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lazy-loading 'pci_devices' on Instance uuid 078c0d57-6a60-4ffc-b196-332f00f1051b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.991 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <uuid>078c0d57-6a60-4ffc-b196-332f00f1051b</uuid>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <name>instance-0000000e</name>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <nova:name>tempest-TestNetworkBasicOps-server-2121031331</nova:name>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:07:07</nova:creationTime>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:07:07 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:07:07 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:07:07 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:07:07 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:07:07 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:07:07 compute-0 nova_compute[189381]:         <nova:user uuid="97d307f20103434babe2431661f5bbdb">tempest-TestNetworkBasicOps-448137458-project-member</nova:user>
Nov 25 11:07:07 compute-0 nova_compute[189381]:         <nova:project uuid="89069d3ee96a4fd493232b094a94877d">tempest-TestNetworkBasicOps-448137458</nova:project>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="b388f0fb-bd04-4296-928b-44c706e0493e"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:07:07 compute-0 nova_compute[189381]:         <nova:port uuid="12f4cfde-a94c-4c66-a066-f073dabfcb90">
Nov 25 11:07:07 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <system>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <entry name="serial">078c0d57-6a60-4ffc-b196-332f00f1051b</entry>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <entry name="uuid">078c0d57-6a60-4ffc-b196-332f00f1051b</entry>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </system>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <os>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   </os>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <features>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   </features>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk.config"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:f4:ba:b8"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <target dev="tap12f4cfde-a9"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/console.log" append="off"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <video>
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </video>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:07:07 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:07:07 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:07:07 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:07:07 compute-0 nova_compute[189381]: </domain>
Nov 25 11:07:07 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
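
The <domain> document above is exactly what nova hands to libvirt next. Outside of nova, an XML document like this can be sanity-checked by defining it against libvirtd directly; a minimal sketch using the libvirt-python bindings, where guest.xml is a hypothetical file holding the dumped XML:

    # Sketch: define (but do not start) a domain from XML like the one above.
    # 'guest.xml' is a hypothetical copy of the logged <domain> document.
    import libvirt

    with open('guest.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')  # requires access to libvirtd
    dom = conn.defineXML(xml)              # persists the domain definition
    print(dom.name(), dom.UUIDString())    # instance-0000000e 078c0d57-...
    conn.close()
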
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.993 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Preparing to wait for external event network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.993 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.994 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.994 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
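
The acquire/release pair above is oslo.concurrency serializing access to nova's per-instance event dict under a "<instance-uuid>-events" lock. The same primitive is available directly; a minimal sketch (the function body here is illustrative, not nova's code):

    # Sketch of the oslo_concurrency lock pattern seen in the log; the lock
    # name mirrors nova's "<instance-uuid>-events" convention.
    from oslo_concurrency import lockutils

    def create_or_get_event(events, name, tag):
        with lockutils.lock('078c0d57-6a60-4ffc-b196-332f00f1051b-events'):
            return events.setdefault((name, tag), object())
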
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.995 189385 DEBUG nova.virt.libvirt.vif [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:06:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2121031331',display_name='tempest-TestNetworkBasicOps-server-2121031331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2121031331',id=14,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJD8tgqgcb1AhoxRl1LsWSz8grSSR7HvFac3Gue31bq68PmaTfpqqvQ1Nzp3FpFe1yJgfkctH98TQri7uN5cvNDX8sS2K+xsbRfTATbNFzm68iWSYI7bJKjqVjeVDfnbYA==',key_name='tempest-TestNetworkBasicOps-1223615685',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='89069d3ee96a4fd493232b094a94877d',ramdisk_id='',reservation_id='r-03dq3w90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-448137458',owner_user_name='tempest-TestNetworkBasicOps-448137458-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:07:00Z,user_data=None,user_id='97d307f20103434babe2431661f5bbdb',uuid=078c0d57-6a60-4ffc-b196-332f00f1051b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.996 189385 DEBUG nova.network.os_vif_util [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converting VIF {"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.997 189385 DEBUG nova.network.os_vif_util [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ba:b8,bridge_name='br-int',has_traffic_filtering=True,id=12f4cfde-a94c-4c66-a066-f073dabfcb90,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12f4cfde-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.997 189385 DEBUG os_vif [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ba:b8,bridge_name='br-int',has_traffic_filtering=True,id=12f4cfde-a94c-4c66-a066-f073dabfcb90,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12f4cfde-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.998 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:07 compute-0 nova_compute[189381]: 2025-11-25 11:07:07.999 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.000 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.004 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.004 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12f4cfde-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.005 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap12f4cfde-a9, col_values=(('external_ids', {'iface-id': '12f4cfde-a94c-4c66-a066-f073dabfcb90', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:ba:b8', 'vm-uuid': '078c0d57-6a60-4ffc-b196-332f00f1051b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
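
The AddPortCommand/DbSetCommand pair above is what os-vif sends over the OVSDB IDL to attach the tap device to br-int and tag the Interface row with the Neutron port ID that OVN matches on. A sketch of the equivalent one-shot CLI call, with all values copied from the log (would need to run as root on the compute node):

    # Sketch: CLI equivalent of the ovsdbapp transaction logged above.
    import subprocess

    subprocess.run([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap12f4cfde-a9',
        '--', 'set', 'Interface', 'tap12f4cfde-a9',
        'external_ids:iface-id=12f4cfde-a94c-4c66-a066-f073dabfcb90',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:f4:ba:b8',
        'external_ids:vm-uuid=078c0d57-6a60-4ffc-b196-332f00f1051b',
    ], check=True)
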
Nov 25 11:07:08 compute-0 NetworkManager[56317]: <info>  [1764068828.0092] manager: (tap12f4cfde-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.008 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.015 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.019 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.020 189385 INFO os_vif [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:ba:b8,bridge_name='br-int',has_traffic_filtering=True,id=12f4cfde-a94c-4c66-a066-f073dabfcb90,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12f4cfde-a9')
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.077 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.078 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.078 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] No VIF found with MAC fa:16:3e:f4:ba:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.079 189385 INFO nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Using config drive
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.296 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.297 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.297 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.297 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.297 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.299 189385 INFO nova.compute.manager [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Terminating instance
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.300 189385 DEBUG nova.compute.manager [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:07:08 compute-0 kernel: tap086b3bc6-2c (unregistering): left promiscuous mode
Nov 25 11:07:08 compute-0 NetworkManager[56317]: <info>  [1764068828.3372] device (tap086b3bc6-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.349 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00168|binding|INFO|Releasing lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 from this chassis (sb_readonly=0)
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00169|binding|INFO|Setting lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 down in Southbound
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00170|binding|INFO|Removing iface tap086b3bc6-2c ovn-installed in OVS
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.354 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.373 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 25 11:07:08 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 37.425s CPU time.
Nov 25 11:07:08 compute-0 systemd-machined[155706]: Machine qemu-14-instance-0000000d terminated.
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.458 189385 INFO nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Creating config drive at /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk.config
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.463 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqywpyhyb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:08.473 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:ef:83 10.100.0.7'], port_security=['fa:16:3e:5a:ef:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '74072f60-1884-462d-9a69-28925a67978d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6daca89a9f274580a80130a94ea91f45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8ed7556-296c-4f8a-8d14-b7db687fcc5d d6b174b5-3e6d-4fce-b47c-4c0b0e953e7c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b5f4bb0-b48a-4dd3-b95b-544c18545f75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=086b3bc6-2c46-45d0-bc3e-f02fd307fe64) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:07:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:08.474 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 in datapath 5a488783-81eb-4a79-a4fc-78987bdf65c9 unbound from our chassis
Nov 25 11:07:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:08.476 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5a488783-81eb-4a79-a4fc-78987bdf65c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:07:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:08.477 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[814dec79-03ad-49ce-8282-73ba99db36da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:08.478 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9 namespace which is not needed anymore
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.580 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.593 189385 DEBUG oslo_concurrency.processutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqywpyhyb" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
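
The config drive is a plain ISO9660 image; the mkisofs invocation and its flags are recorded verbatim in the two CMD lines above and can be replayed. A sketch (the /tmp/tmpqywpyhyb path was nova's temporary metadata tree and stands in here for any directory laid out as a config-2 drive; disk.config is written to the current directory):

    # Sketch: rebuild the config-drive ISO with the flags nova logged above.
    import subprocess

    metadata_dir = '/tmp/tmpqywpyhyb'  # placeholder taken from the log
    subprocess.run([
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', metadata_dir,
    ], check=True)
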
Nov 25 11:07:08 compute-0 kernel: tap086b3bc6-2c: entered promiscuous mode
Nov 25 11:07:08 compute-0 kernel: tap086b3bc6-2c (unregistering): left promiscuous mode
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00171|binding|INFO|Claiming lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 for this chassis.
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00172|binding|INFO|086b3bc6-2c46-45d0-bc3e-f02fd307fe64: Claiming fa:16:3e:5a:ef:83 10.100.0.7
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.756 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00173|binding|INFO|Setting lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 ovn-installed in OVS
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00174|if_status|INFO|Dropped 1 log messages in last 281 seconds (most recently, 281 seconds ago) due to excessive rate
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00175|if_status|INFO|Not setting lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 down as sb is readonly
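
ovn-controller's claim/release messages above track rows in the southbound Port_Binding table, and the state it is acting on can be inspected directly. A sketch (assumes ovn-sbctl is available and can reach the southbound DB from wherever this runs):

    # Sketch: inspect the Port_Binding row behind the claim/release messages.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=086b3bc6-2c46-45d0-bc3e-f02fd307fe64'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # chassis, up, mac and external_ids for the lport
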
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.786 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.820 189385 INFO nova.virt.libvirt.driver [-] [instance: 74072f60-1884-462d-9a69-28925a67978d] Instance destroyed successfully.
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.822 189385 DEBUG nova.objects.instance [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lazy-loading 'resources' on Instance uuid 74072f60-1884-462d-9a69-28925a67978d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.835 189385 DEBUG nova.virt.libvirt.vif [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:05:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-671773331',display_name='tempest-TestServerBasicOps-server-671773331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-671773331',id=13,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCm6h7gMXH3DYHNr5rdS+vbtvUVOkFXXJQVtLcM0GmrbK0AYY4Se5XWSLFwYlIxzP88Cl3TVscoHCphvEWXJNl+yg8pdZ5IvlZoWt0z45Iz6VKseG1WovCCMsAylx+LTkg==',key_name='tempest-TestServerBasicOps-1049920664',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:05:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6daca89a9f274580a80130a94ea91f45',ramdisk_id='',reservation_id='r-cil2kb8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-382705340',owner_user_name='tempest-TestServerBasicOps-382705340-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:07:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='09f4a560d6494ec3aa4e1a291f7917c1',uuid=74072f60-1884-462d-9a69-28925a67978d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.836 189385 DEBUG nova.network.os_vif_util [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Converting VIF {"id": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "address": "fa:16:3e:5a:ef:83", "network": {"id": "5a488783-81eb-4a79-a4fc-78987bdf65c9", "bridge": "br-int", "label": "tempest-TestServerBasicOps-566008335-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6daca89a9f274580a80130a94ea91f45", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap086b3bc6-2c", "ovs_interfaceid": "086b3bc6-2c46-45d0-bc3e-f02fd307fe64", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.837 189385 DEBUG nova.network.os_vif_util [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5a:ef:83,bridge_name='br-int',has_traffic_filtering=True,id=086b3bc6-2c46-45d0-bc3e-f02fd307fe64,network=Network(5a488783-81eb-4a79-a4fc-78987bdf65c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086b3bc6-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.838 189385 DEBUG os_vif [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:ef:83,bridge_name='br-int',has_traffic_filtering=True,id=086b3bc6-2c46-45d0-bc3e-f02fd307fe64,network=Network(5a488783-81eb-4a79-a4fc-78987bdf65c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086b3bc6-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.840 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.841 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap086b3bc6-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:08 compute-0 neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9[255924]: [NOTICE]   (255928) : haproxy version is 2.8.14-c23fe91
Nov 25 11:07:08 compute-0 neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9[255924]: [NOTICE]   (255928) : path to executable is /usr/sbin/haproxy
Nov 25 11:07:08 compute-0 neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9[255924]: [WARNING]  (255928) : Exiting Master process...
Nov 25 11:07:08 compute-0 neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9[255924]: [WARNING]  (255928) : Exiting Master process...
Nov 25 11:07:08 compute-0 neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9[255924]: [ALERT]    (255928) : Current worker (255930) exited with code 143 (Terminated)
Nov 25 11:07:08 compute-0 neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9[255924]: [WARNING]  (255928) : All workers exited. Exiting... (0)
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.849 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.851 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:07:08 compute-0 systemd[1]: libpod-cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a.scope: Deactivated successfully.
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.854 189385 INFO os_vif [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:ef:83,bridge_name='br-int',has_traffic_filtering=True,id=086b3bc6-2c46-45d0-bc3e-f02fd307fe64,network=Network(5a488783-81eb-4a79-a4fc-78987bdf65c9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap086b3bc6-2c')
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.855 189385 INFO nova.virt.libvirt.driver [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Deleting instance files /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d_del
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.855 189385 INFO nova.virt.libvirt.driver [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Deletion of /var/lib/nova/instances/74072f60-1884-462d-9a69-28925a67978d_del complete
Nov 25 11:07:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:08.856 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:ef:83 10.100.0.7'], port_security=['fa:16:3e:5a:ef:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '74072f60-1884-462d-9a69-28925a67978d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6daca89a9f274580a80130a94ea91f45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8ed7556-296c-4f8a-8d14-b7db687fcc5d d6b174b5-3e6d-4fce-b47c-4c0b0e953e7c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b5f4bb0-b48a-4dd3-b95b-544c18545f75, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=086b3bc6-2c46-45d0-bc3e-f02fd307fe64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00176|binding|INFO|Releasing lport 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 from this chassis (sb_readonly=0)
Nov 25 11:07:08 compute-0 podman[256483]: 2025-11-25 11:07:08.860119051 +0000 UTC m=+0.267679727 container died cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:07:08 compute-0 kernel: tap12f4cfde-a9: entered promiscuous mode
Nov 25 11:07:08 compute-0 systemd-udevd[256461]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:07:08 compute-0 NetworkManager[56317]: <info>  [1764068828.8683] manager: (tap12f4cfde-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00177|if_status|INFO|Not updating pb chassis for 12f4cfde-a94c-4c66-a066-f073dabfcb90 now as sb is readonly
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.882 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 NetworkManager[56317]: <info>  [1764068828.8869] device (tap12f4cfde-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:07:08 compute-0 NetworkManager[56317]: <info>  [1764068828.8878] device (tap12f4cfde-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.905 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.908 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.914 189385 DEBUG nova.compute.manager [req-86551b29-8afd-4d0e-9503-dd8046d37976 req-d80fe5cd-5673-46ff-9e58-397fb8aa4ba7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received event network-vif-unplugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.914 189385 DEBUG oslo_concurrency.lockutils [req-86551b29-8afd-4d0e-9503-dd8046d37976 req-d80fe5cd-5673-46ff-9e58-397fb8aa4ba7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.915 189385 DEBUG oslo_concurrency.lockutils [req-86551b29-8afd-4d0e-9503-dd8046d37976 req-d80fe5cd-5673-46ff-9e58-397fb8aa4ba7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.915 189385 DEBUG oslo_concurrency.lockutils [req-86551b29-8afd-4d0e-9503-dd8046d37976 req-d80fe5cd-5673-46ff-9e58-397fb8aa4ba7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.915 189385 DEBUG nova.compute.manager [req-86551b29-8afd-4d0e-9503-dd8046d37976 req-d80fe5cd-5673-46ff-9e58-397fb8aa4ba7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] No waiting events found dispatching network-vif-unplugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.915 189385 DEBUG nova.compute.manager [req-86551b29-8afd-4d0e-9503-dd8046d37976 req-d80fe5cd-5673-46ff-9e58-397fb8aa4ba7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received event network-vif-unplugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 11:07:08 compute-0 systemd-machined[155706]: New machine qemu-15-instance-0000000e.
Nov 25 11:07:08 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00178|binding|INFO|Claiming lport 12f4cfde-a94c-4c66-a066-f073dabfcb90 for this chassis.
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00179|binding|INFO|12f4cfde-a94c-4c66-a066-f073dabfcb90: Claiming fa:16:3e:f4:ba:b8 10.100.0.7
Nov 25 11:07:08 compute-0 ovn_controller[97779]: 2025-11-25T11:07:08Z|00180|binding|INFO|Setting lport 12f4cfde-a94c-4c66-a066-f073dabfcb90 ovn-installed in OVS
Nov 25 11:07:08 compute-0 nova_compute[189381]: 2025-11-25 11:07:08.961 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:08 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:08.964 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:ef:83 10.100.0.7'], port_security=['fa:16:3e:5a:ef:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '74072f60-1884-462d-9a69-28925a67978d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6daca89a9f274580a80130a94ea91f45', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8ed7556-296c-4f8a-8d14-b7db687fcc5d d6b174b5-3e6d-4fce-b47c-4c0b0e953e7c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5b5f4bb0-b48a-4dd3-b95b-544c18545f75, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=086b3bc6-2c46-45d0-bc3e-f02fd307fe64) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.001 189385 INFO nova.compute.manager [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.002 189385 DEBUG oslo.service.loopingcall [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.003 189385 DEBUG nova.compute.manager [-] [instance: 74072f60-1884-462d-9a69-28925a67978d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.004 189385 DEBUG nova.network.neutron [-] [instance: 74072f60-1884-462d-9a69-28925a67978d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
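The three entries above show instance termination wrapping network deallocation in a retrying looping call. A hedged sketch of that retry shape in plain Python; the real code drives this through oslo.service's loopingcall module, and the attempt count and delay below are invented for illustration:

    import time

    def deallocate_network_with_retries(deallocate, attempts=3, delay=1.0):
        # Illustrative retry wrapper; the real code narrows the exception types.
        for attempt in range(1, attempts + 1):
            try:
                return deallocate()
            except Exception as exc:
                if attempt == attempts:
                    raise
                print(f'deallocation failed ({exc}); retry {attempt}/{attempts}')
                time.sleep(delay)

    deallocate_network_with_retries(lambda: print('deallocate_for_instance()'))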
Nov 25 11:07:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a-userdata-shm.mount: Deactivated successfully.
Nov 25 11:07:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e8ef83a0850ec62cee7872b5a1f79490aa7d80bb176fc0aca15a6b203f6eb22-merged.mount: Deactivated successfully.
Nov 25 11:07:09 compute-0 ovn_controller[97779]: 2025-11-25T11:07:09Z|00181|binding|INFO|Setting lport 12f4cfde-a94c-4c66-a066-f073dabfcb90 up in Southbound
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.063 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:ba:b8 10.100.0.7'], port_security=['fa:16:3e:f4:ba:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '078c0d57-6a60-4ffc-b196-332f00f1051b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89069d3ee96a4fd493232b094a94877d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2a5f38a9-6d54-453a-8b93-bf4c5cd9a215', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=feead41c-bd30-4d7d-b182-8bed9968ffc7, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=12f4cfde-a94c-4c66-a066-f073dabfcb90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:07:09 compute-0 podman[256483]: 2025-11-25 11:07:09.136448728 +0000 UTC m=+0.544009404 container cleanup cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:07:09 compute-0 systemd[1]: libpod-conmon-cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a.scope: Deactivated successfully.
Nov 25 11:07:09 compute-0 podman[256551]: 2025-11-25 11:07:09.316579638 +0000 UTC m=+0.138315469 container remove cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.334 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[9305d423-69f2-4caa-b3e5-1f8f7d478a3e]: (4, ('Tue Nov 25 11:07:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9 (cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a)\ncfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a\nTue Nov 25 11:07:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9 (cfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a)\ncfd29c29498b63b6a46663d22b91c359b97272f470433574ac22c0c19f00481a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.336 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[1c410053-b882-48aa-87d5-82d5aee2d45c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.336 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a488783-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:09 compute-0 kernel: tap5a488783-80: left promiscuous mode
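Here DelPortCommand removes the torn-down metadata namespace's tap interface from OVS inside a single ovsdbapp transaction, and the kernel line confirms the device left promiscuous mode. A self-contained toy model of the idempotent command pattern (if_exists=True means a missing port is not an error); this is a sketch of the shape, not ovsdbapp's API:

    class DelPortCommand:
        def __init__(self, port, bridge=None, if_exists=True):
            self.port, self.bridge, self.if_exists = port, bridge, if_exists

        def run(self, ports):
            # With if_exists=True, deleting an absent port is a no-op,
            # which is what makes the transaction safe to replay.
            if self.port in ports:
                ports.remove(self.port)
            elif not self.if_exists:
                raise RuntimeError(f'port {self.port} does not exist')

    ports = {'tap5a488783-80', 'tap12f4cfde-a9'}
    DelPortCommand(port='tap5a488783-80').run(ports)
    print(ports)   # {'tap12f4cfde-a9'}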
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.338 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.355 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.356 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d793b1-6565-4fb2-890f-7ca1d334e894]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.378 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e331266c-76f6-4b66-9cb4-693c7e630f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.379 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[95fa71a8-07c8-4433-9cc4-a70f1a3202d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.394 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2edc99a0-c27b-465a-85d5-6d1603067701]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566367, 'reachable_time': 36137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256564, 'error': None, 'target': 'ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
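The reply above is a serialized pyroute2 RTM_NEWLINK message for 'lo' inside the ovnmeta namespace, marshalled back across the privsep boundary. Assuming pyroute2 is installed (it is the library neutron's ip_lib privsep helpers wrap), the same attributes can be read directly:

    # Direct pyroute2 query for the link attributes seen in the reply above.
    from pyroute2 import IPRoute

    ipr = IPRoute()
    try:
        for msg in ipr.get_links():
            print(msg.get_attr('IFLA_IFNAME'),
                  msg.get_attr('IFLA_OPERSTATE'),
                  msg.get_attr('IFLA_MTU'))   # e.g. "lo UNKNOWN 65536"
    finally:
        ipr.close()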
Nov 25 11:07:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d5a488783\x2d81eb\x2d4a79\x2da4fc\x2d78987bdf65c9.mount: Deactivated successfully.
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.399 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5a488783-81eb-4a79-a4fc-78987bdf65c9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.399 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[6c77b4c0-5fdc-455a-beb4-8e17a4271d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.402 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 in datapath 5a488783-81eb-4a79-a4fc-78987bdf65c9 unbound from our chassis
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.404 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5a488783-81eb-4a79-a4fc-78987bdf65c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.405 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0be8ca2d-8565-48bb-83c3-b526f3722f0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.406 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 086b3bc6-2c46-45d0-bc3e-f02fd307fe64 in datapath 5a488783-81eb-4a79-a4fc-78987bdf65c9 unbound from our chassis
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.408 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5a488783-81eb-4a79-a4fc-78987bdf65c9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.409 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[aa54f06d-6989-4e97-abb3-f48af65d7831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.409 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 12f4cfde-a94c-4c66-a066-f073dabfcb90 in datapath a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 unbound from our chassis
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.411 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.428 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[606b5f25-d5bb-4784-932b-198a18fac3c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.461 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[24147a39-8cd2-48ef-a054-7f3489478b6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.465 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[c3da9aab-28ac-400f-aeb9-204202851e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.500 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[e589edaa-ae80-4faf-a395-2faf621b689e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.516 189385 DEBUG nova.network.neutron [req-df7d4740-95b6-4ff8-adc8-03ae98eeeb78 req-f673b913-3db7-4050-9019-a0a93658b711 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Updated VIF entry in instance network info cache for port 12f4cfde-a94c-4c66-a066-f073dabfcb90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.516 189385 DEBUG nova.network.neutron [req-df7d4740-95b6-4ff8-adc8-03ae98eeeb78 req-f673b913-3db7-4050-9019-a0a93658b711 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Updating instance_info_cache with network_info: [{"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.519 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b99ad5-64e8-485c-bf48-c9c7d650fa59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa6f834aa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:f4:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565744, 'reachable_time': 30685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256570, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.536 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9712b9-3baa-42ed-adba-fc1d1cb69ab0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa6f834aa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565756, 'tstamp': 565756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256572, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa6f834aa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565759, 'tstamp': 565759}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256572, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
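The two RTM_NEWADDR replies above show the freshly provisioned metadata interface tapa6f834aa-d1 holding 10.100.0.2/28 plus the well-known metadata address 169.254.169.254/32. A sketch verifying that from inside the namespace with pyroute2's NetNS, assuming root privileges and that the namespace still exists:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2')
    try:
        for msg in ns.get_addr():
            print(msg.get_attr('IFA_LABEL'),
                  '%s/%s' % (msg.get_attr('IFA_ADDRESS'), msg['prefixlen']))
            # expected: tapa6f834aa-d1 10.100.0.2/28
            #           tapa6f834aa-d1 169.254.169.254/32
    finally:
        ns.close()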
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.538 189385 DEBUG oslo_concurrency.lockutils [req-df7d4740-95b6-4ff8-adc8-03ae98eeeb78 req-f673b913-3db7-4050-9019-a0a93658b711 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.539 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6f834aa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.540 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:09 compute-0 nova_compute[189381]: 2025-11-25 11:07:09.542 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.544 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6f834aa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.544 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.544 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa6f834aa-d0, col_values=(('external_ids', {'iface-id': '702441f8-9440-4a38-a0f0-225d972b0155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:09 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:09.545 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:07:10 compute-0 nova_compute[189381]: 2025-11-25 11:07:10.268 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068830.2677245, 078c0d57-6a60-4ffc-b196-332f00f1051b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:07:10 compute-0 nova_compute[189381]: 2025-11-25 11:07:10.268 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] VM Started (Lifecycle Event)
Nov 25 11:07:10 compute-0 nova_compute[189381]: 2025-11-25 11:07:10.287 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:07:10 compute-0 nova_compute[189381]: 2025-11-25 11:07:10.293 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068830.2678235, 078c0d57-6a60-4ffc-b196-332f00f1051b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:07:10 compute-0 nova_compute[189381]: 2025-11-25 11:07:10.293 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] VM Paused (Lifecycle Event)
Nov 25 11:07:10 compute-0 nova_compute[189381]: 2025-11-25 11:07:10.308 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:07:10 compute-0 nova_compute[189381]: 2025-11-25 11:07:10.313 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:07:10 compute-0 nova_compute[189381]: 2025-11-25 11:07:10.335 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] During sync_power_state the instance has a pending task (spawning). Skip.
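In the entries above, handle_lifecycle_event compares the DB power state (0, NOSTATE) with what libvirt reports (3, PAUSED) and skips the sync because task_state is still 'spawning'. A condensed sketch of that decision, using the numeric values from the log (they match nova.compute.power_state constants):

    NOSTATE, RUNNING, PAUSED = 0, 1, 3   # values as they appear in the log

    def sync_power_state(task_state, db_power_state, vm_power_state):
        # Condensed decision mirroring the "pending task ... Skip." line.
        if task_state is not None:
            return f'instance has a pending task ({task_state}); skip'
        if db_power_state != vm_power_state:
            return 'reconcile DB power_state with the hypervisor'
        return 'in sync'

    print(sync_power_state('spawning', NOSTATE, PAUSED))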
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.997 189385 DEBUG nova.compute.manager [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received event network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.997 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "74072f60-1884-462d-9a69-28925a67978d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.997 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.997 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.998 189385 DEBUG nova.compute.manager [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] No waiting events found dispatching network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.998 189385 WARNING nova.compute.manager [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received unexpected event network-vif-plugged-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 for instance with vm_state active and task_state deleting.
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.998 189385 DEBUG nova.compute.manager [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received event network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.998 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.998 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.999 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.999 189385 DEBUG nova.compute.manager [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Processing event network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.999 189385 DEBUG nova.compute.manager [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received event network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.999 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:11 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.999 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.999 189385 DEBUG oslo_concurrency.lockutils [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:11.999 189385 DEBUG nova.compute.manager [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] No waiting events found dispatching network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.000 189385 WARNING nova.compute.manager [req-7a48e409-3ffa-4d6e-99b6-2285c978a59b req-2cf50567-4353-4ab3-a6b8-eb869a2c0b82 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received unexpected event network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 for instance with vm_state building and task_state spawning.
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.000 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
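The spawn path registered a waiter for network-vif-plugged before plugging the VIF, then blocked until the event in the preceding entries arrived, completing in 1 second. A self-contained sketch of that prepare-then-wait shape; this is a toy stand-in, not Nova's wait_for_instance_event:

    import contextlib
    import threading

    class Events:   # toy stand-in for the instance-event registry
        def __init__(self):
            self._latches = {}
        def prepare(self, uuid, name):
            self._latches[(uuid, name)] = threading.Event()
            return self._latches[(uuid, name)]
        def pop(self, uuid, name):
            return self._latches.pop((uuid, name), None)

    @contextlib.contextmanager
    def wait_for_instance_event(events, uuid, names, timeout=300):
        latches = [events.prepare(uuid, n) for n in names]
        yield   # plug the VIFs inside the block
        for latch in latches:
            if not latch.wait(timeout):
                raise TimeoutError('network-vif-plugged not received')

    ev = Events()
    uuid = '078c0d57-6a60-4ffc-b196-332f00f1051b'
    event = 'network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90'
    with wait_for_instance_event(ev, uuid, [event], timeout=1):
        ev.pop(uuid, event).set()   # Neutron's callback fires inside the window
    print('Instance event wait completed')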
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.004 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.006 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068832.005697, 078c0d57-6a60-4ffc-b196-332f00f1051b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.006 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] VM Resumed (Lifecycle Event)
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.011 189385 INFO nova.virt.libvirt.driver [-] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Instance spawned successfully.
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.012 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.031 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.037 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.037 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.038 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.038 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.038 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.039 189385 DEBUG nova.virt.libvirt.driver [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.045 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.069 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.377 189385 INFO nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Took 11.88 seconds to spawn the instance on the hypervisor.
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.378 189385 DEBUG nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.521 189385 INFO nova.compute.manager [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Took 12.64 seconds to build instance.
Nov 25 11:07:12 compute-0 nova_compute[189381]: 2025-11-25 11:07:12.581 189385 DEBUG oslo_concurrency.lockutils [None req-d7e9b128-78ec-408f-a976-18c6203b7d88 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.146 189385 DEBUG nova.network.neutron [-] [instance: 74072f60-1884-462d-9a69-28925a67978d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.238 189385 INFO nova.compute.manager [-] [instance: 74072f60-1884-462d-9a69-28925a67978d] Took 4.23 seconds to deallocate network for instance.
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.343 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.344 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.476 189385 DEBUG nova.compute.provider_tree [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.495 189385 DEBUG nova.scheduler.client.report [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
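Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. With the values reported above that works out to 32 VCPU, 7167 MB of RAM and 70.2 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2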
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.584 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.625 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.633 189385 DEBUG nova.compute.manager [req-02be5a79-1f12-4911-a81a-48f7545cb6af req-9c85a9f4-2d87-4ac0-9903-bbd2899a44af d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 74072f60-1884-462d-9a69-28925a67978d] Received event network-vif-deleted-086b3bc6-2c46-45d0-bc3e-f02fd307fe64 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.714 189385 INFO nova.scheduler.client.report [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Deleted allocations for instance 74072f60-1884-462d-9a69-28925a67978d
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.844 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:13 compute-0 nova_compute[189381]: 2025-11-25 11:07:13.945 189385 DEBUG oslo_concurrency.lockutils [None req-a1e0cc88-23ce-4f6c-b6b2-0d96d29d2dc4 09f4a560d6494ec3aa4e1a291f7917c1 6daca89a9f274580a80130a94ea91f45 - - default default] Lock "74072f60-1884-462d-9a69-28925a67978d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:14 compute-0 podman[256581]: 2025-11-25 11:07:14.773871391 +0000 UTC m=+0.087253759 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:07:14 compute-0 podman[256580]: 2025-11-25 11:07:14.824186169 +0000 UTC m=+0.116210349 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Nov 25 11:07:16 compute-0 podman[256618]: 2025-11-25 11:07:16.965062004 +0000 UTC m=+0.068806895 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., config_id=edpm, container_name=kepler, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public)
Nov 25 11:07:17 compute-0 nova_compute[189381]: 2025-11-25 11:07:17.972 189385 DEBUG nova.compute.manager [req-4496a3b1-7733-4af3-a2de-058b74af1d93 req-c5bc9df1-bd03-4e2f-b3a7-b09a7485c331 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received event network-changed-12f4cfde-a94c-4c66-a066-f073dabfcb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:17 compute-0 nova_compute[189381]: 2025-11-25 11:07:17.974 189385 DEBUG nova.compute.manager [req-4496a3b1-7733-4af3-a2de-058b74af1d93 req-c5bc9df1-bd03-4e2f-b3a7-b09a7485c331 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Refreshing instance network info cache due to event network-changed-12f4cfde-a94c-4c66-a066-f073dabfcb90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:07:17 compute-0 nova_compute[189381]: 2025-11-25 11:07:17.974 189385 DEBUG oslo_concurrency.lockutils [req-4496a3b1-7733-4af3-a2de-058b74af1d93 req-c5bc9df1-bd03-4e2f-b3a7-b09a7485c331 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:07:17 compute-0 nova_compute[189381]: 2025-11-25 11:07:17.975 189385 DEBUG oslo_concurrency.lockutils [req-4496a3b1-7733-4af3-a2de-058b74af1d93 req-c5bc9df1-bd03-4e2f-b3a7-b09a7485c331 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:07:17 compute-0 nova_compute[189381]: 2025-11-25 11:07:17.975 189385 DEBUG nova.network.neutron [req-4496a3b1-7733-4af3-a2de-058b74af1d93 req-c5bc9df1-bd03-4e2f-b3a7-b09a7485c331 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Refreshing network info cache for port 12f4cfde-a94c-4c66-a066-f073dabfcb90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:07:18 compute-0 nova_compute[189381]: 2025-11-25 11:07:18.585 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:18 compute-0 nova_compute[189381]: 2025-11-25 11:07:18.849 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:19 compute-0 nova_compute[189381]: 2025-11-25 11:07:19.844 189385 DEBUG nova.network.neutron [req-4496a3b1-7733-4af3-a2de-058b74af1d93 req-c5bc9df1-bd03-4e2f-b3a7-b09a7485c331 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Updated VIF entry in instance network info cache for port 12f4cfde-a94c-4c66-a066-f073dabfcb90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:07:19 compute-0 nova_compute[189381]: 2025-11-25 11:07:19.845 189385 DEBUG nova.network.neutron [req-4496a3b1-7733-4af3-a2de-058b74af1d93 req-c5bc9df1-bd03-4e2f-b3a7-b09a7485c331 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Updating instance_info_cache with network_info: [{"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:07:20 compute-0 nova_compute[189381]: 2025-11-25 11:07:20.358 189385 DEBUG oslo_concurrency.lockutils [req-4496a3b1-7733-4af3-a2de-058b74af1d93 req-c5bc9df1-bd03-4e2f-b3a7-b09a7485c331 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-078c0d57-6a60-4ffc-b196-332f00f1051b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:07:23 compute-0 nova_compute[189381]: 2025-11-25 11:07:23.588 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:23 compute-0 nova_compute[189381]: 2025-11-25 11:07:23.815 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068828.8144138, 74072f60-1884-462d-9a69-28925a67978d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:07:23 compute-0 nova_compute[189381]: 2025-11-25 11:07:23.816 189385 INFO nova.compute.manager [-] [instance: 74072f60-1884-462d-9a69-28925a67978d] VM Stopped (Lifecycle Event)
Nov 25 11:07:23 compute-0 nova_compute[189381]: 2025-11-25 11:07:23.834 189385 DEBUG nova.compute.manager [None req-13c8d359-fd88-4ee1-9ec1-d547831ae94c - - - - - -] [instance: 74072f60-1884-462d-9a69-28925a67978d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:07:23 compute-0 nova_compute[189381]: 2025-11-25 11:07:23.852 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:23 compute-0 ovn_controller[97779]: 2025-11-25T11:07:23Z|00182|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:07:23 compute-0 ovn_controller[97779]: 2025-11-25T11:07:23Z|00183|binding|INFO|Releasing lport 702441f8-9440-4a38-a0f0-225d972b0155 from this chassis (sb_readonly=0)
Nov 25 11:07:23 compute-0 nova_compute[189381]: 2025-11-25 11:07:23.998 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:24 compute-0 podman[256639]: 2025-11-25 11:07:24.946778447 +0000 UTC m=+0.064723067 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 25 11:07:27 compute-0 podman[256659]: 2025-11-25 11:07:27.96003248 +0000 UTC m=+0.071180404 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41)
Nov 25 11:07:27 compute-0 podman[256660]: 2025-11-25 11:07:27.979738321 +0000 UTC m=+0.088667750 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:07:28 compute-0 nova_compute[189381]: 2025-11-25 11:07:28.591 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:28 compute-0 nova_compute[189381]: 2025-11-25 11:07:28.853 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:29 compute-0 podman[203557]: time="2025-11-25T11:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:07:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Nov 25 11:07:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5261 "" "Go-http-client/1.1"
Nov 25 11:07:31 compute-0 openstack_network_exporter[205722]: ERROR   11:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:07:31 compute-0 openstack_network_exporter[205722]: ERROR   11:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:07:31 compute-0 openstack_network_exporter[205722]: ERROR   11:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:07:31 compute-0 openstack_network_exporter[205722]: ERROR   11:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:07:31 compute-0 openstack_network_exporter[205722]: ERROR   11:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:07:31 compute-0 podman[256703]: 2025-11-25 11:07:31.967444861 +0000 UTC m=+0.075059076 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 25 11:07:32 compute-0 nova_compute[189381]: 2025-11-25 11:07:32.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:32 compute-0 nova_compute[189381]: 2025-11-25 11:07:32.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:32 compute-0 podman[256702]: 2025-11-25 11:07:32.031413675 +0000 UTC m=+0.144130558 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:07:32 compute-0 ovn_controller[97779]: 2025-11-25T11:07:32Z|00184|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:07:32 compute-0 ovn_controller[97779]: 2025-11-25T11:07:32Z|00185|binding|INFO|Releasing lport 702441f8-9440-4a38-a0f0-225d972b0155 from this chassis (sb_readonly=0)
Nov 25 11:07:32 compute-0 nova_compute[189381]: 2025-11-25 11:07:32.682 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:33 compute-0 nova_compute[189381]: 2025-11-25 11:07:33.595 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:33 compute-0 nova_compute[189381]: 2025-11-25 11:07:33.856 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.055 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.056 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.056 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.057 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:07:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:36.073 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:36.073 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:36.074 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.164 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:36 compute-0 podman[256744]: 2025-11-25 11:07:36.186226377 +0000 UTC m=+0.067541968 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.229 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.231 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.304 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.316 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.380 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.382 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.448 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.456 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.522 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.524 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.586 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.993 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.995 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4886MB free_disk=72.07131576538086GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.996 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:36 compute-0 nova_compute[189381]: 2025-11-25 11:07:36.996 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.073 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.074 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.075 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 078c0d57-6a60-4ffc-b196-332f00f1051b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.075 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.076 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.171 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.209 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.320 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:07:37 compute-0 nova_compute[189381]: 2025-11-25 11:07:37.321 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:38 compute-0 nova_compute[189381]: 2025-11-25 11:07:38.600 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:38 compute-0 nova_compute[189381]: 2025-11-25 11:07:38.860 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:40 compute-0 nova_compute[189381]: 2025-11-25 11:07:40.322 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:40 compute-0 nova_compute[189381]: 2025-11-25 11:07:40.324 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:07:40 compute-0 nova_compute[189381]: 2025-11-25 11:07:40.841 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:07:40 compute-0 nova_compute[189381]: 2025-11-25 11:07:40.842 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:07:40 compute-0 nova_compute[189381]: 2025-11-25 11:07:40.842 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:07:43 compute-0 nova_compute[189381]: 2025-11-25 11:07:43.067 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updating instance_info_cache with network_info: [{"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:07:43 compute-0 nova_compute[189381]: 2025-11-25 11:07:43.083 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:07:43 compute-0 nova_compute[189381]: 2025-11-25 11:07:43.084 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:07:43 compute-0 nova_compute[189381]: 2025-11-25 11:07:43.085 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:43 compute-0 nova_compute[189381]: 2025-11-25 11:07:43.085 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:43 compute-0 nova_compute[189381]: 2025-11-25 11:07:43.603 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:43 compute-0 nova_compute[189381]: 2025-11-25 11:07:43.778 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:43 compute-0 nova_compute[189381]: 2025-11-25 11:07:43.862 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:44 compute-0 podman[256786]: 2025-11-25 11:07:44.959398434 +0000 UTC m=+0.071422760 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 11:07:44 compute-0 podman[256787]: 2025-11-25 11:07:44.970130755 +0000 UTC m=+0.077164967 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Nov 25 11:07:45 compute-0 nova_compute[189381]: 2025-11-25 11:07:45.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:45 compute-0 nova_compute[189381]: 2025-11-25 11:07:45.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:07:45 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:45.973 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:07:45 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:45.974 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:07:45 compute-0 nova_compute[189381]: 2025-11-25 11:07:45.976 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:46 compute-0 nova_compute[189381]: 2025-11-25 11:07:46.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:07:47 compute-0 ovn_controller[97779]: 2025-11-25T11:07:47Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:ba:b8 10.100.0.7
Nov 25 11:07:47 compute-0 ovn_controller[97779]: 2025-11-25T11:07:47Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:ba:b8 10.100.0.7
Nov 25 11:07:47 compute-0 podman[256842]: 2025-11-25 11:07:47.965973824 +0000 UTC m=+0.075691784 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-container, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, config_id=edpm, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 11:07:48 compute-0 nova_compute[189381]: 2025-11-25 11:07:48.605 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:48 compute-0 nova_compute[189381]: 2025-11-25 11:07:48.865 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:51 compute-0 nova_compute[189381]: 2025-11-25 11:07:51.528 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:53 compute-0 nova_compute[189381]: 2025-11-25 11:07:53.607 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:53 compute-0 nova_compute[189381]: 2025-11-25 11:07:53.868 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:54 compute-0 nova_compute[189381]: 2025-11-25 11:07:54.910 189385 INFO nova.compute.manager [None req-d58e5acb-515b-4fc7-8933-b24eb645a798 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Get console output
Nov 25 11:07:54 compute-0 nova_compute[189381]: 2025-11-25 11:07:54.916 239472 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 25 11:07:54 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:54.976 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.255 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "078c0d57-6a60-4ffc-b196-332f00f1051b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.256 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.257 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.257 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.258 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.259 189385 INFO nova.compute.manager [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Terminating instance
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.262 189385 DEBUG nova.compute.manager [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:07:55 compute-0 kernel: tap12f4cfde-a9 (unregistering): left promiscuous mode
Nov 25 11:07:55 compute-0 NetworkManager[56317]: <info>  [1764068875.2987] device (tap12f4cfde-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:07:55 compute-0 ovn_controller[97779]: 2025-11-25T11:07:55Z|00186|binding|INFO|Releasing lport 12f4cfde-a94c-4c66-a066-f073dabfcb90 from this chassis (sb_readonly=0)
Nov 25 11:07:55 compute-0 ovn_controller[97779]: 2025-11-25T11:07:55Z|00187|binding|INFO|Setting lport 12f4cfde-a94c-4c66-a066-f073dabfcb90 down in Southbound
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.310 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 ovn_controller[97779]: 2025-11-25T11:07:55Z|00188|binding|INFO|Removing iface tap12f4cfde-a9 ovn-installed in OVS
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.316 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.332 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 25 11:07:55 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 38.021s CPU time.
Nov 25 11:07:55 compute-0 systemd-machined[155706]: Machine qemu-15-instance-0000000e terminated.
Nov 25 11:07:55 compute-0 ovn_controller[97779]: 2025-11-25T11:07:55Z|00189|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:07:55 compute-0 ovn_controller[97779]: 2025-11-25T11:07:55Z|00190|binding|INFO|Releasing lport 702441f8-9440-4a38-a0f0-225d972b0155 from this chassis (sb_readonly=0)
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.402 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:ba:b8 10.100.0.7'], port_security=['fa:16:3e:f4:ba:b8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '078c0d57-6a60-4ffc-b196-332f00f1051b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89069d3ee96a4fd493232b094a94877d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2a5f38a9-6d54-453a-8b93-bf4c5cd9a215', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=feead41c-bd30-4d7d-b182-8bed9968ffc7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=12f4cfde-a94c-4c66-a066-f073dabfcb90) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.403 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 12f4cfde-a94c-4c66-a066-f073dabfcb90 in datapath a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 unbound from our chassis
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.405 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2
Nov 25 11:07:55 compute-0 podman[256862]: 2025-11-25 11:07:55.4159868 +0000 UTC m=+0.090857844 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.433 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3f3587-9310-4cea-9e59-769f63435dc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.455 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.475 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[695aac6e-c7dd-4fe4-9a46-0bbc172b986d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.482 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[1897a7a8-e303-4cb9-8479-49139ebec013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.495 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.505 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.532 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[b72c6fd4-74a3-408d-afaf-4a670f949742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.552 189385 INFO nova.virt.libvirt.driver [-] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Instance destroyed successfully.
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.553 189385 DEBUG nova.objects.instance [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lazy-loading 'resources' on Instance uuid 078c0d57-6a60-4ffc-b196-332f00f1051b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.566 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[56aae522-4a63-4fc9-8503-0cf3a9bf4139]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa6f834aa-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:f4:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565744, 'reachable_time': 30685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256908, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.585 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[695fca2d-0d05-45ab-8f67-9d524cfedb91]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapa6f834aa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565756, 'tstamp': 565756}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256909, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa6f834aa-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 565759, 'tstamp': 565759}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256909, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.586 189385 DEBUG nova.virt.libvirt.vif [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:06:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2121031331',display_name='tempest-TestNetworkBasicOps-server-2121031331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2121031331',id=14,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJD8tgqgcb1AhoxRl1LsWSz8grSSR7HvFac3Gue31bq68PmaTfpqqvQ1Nzp3FpFe1yJgfkctH98TQri7uN5cvNDX8sS2K+xsbRfTATbNFzm68iWSYI7bJKjqVjeVDfnbYA==',key_name='tempest-TestNetworkBasicOps-1223615685',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:07:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='89069d3ee96a4fd493232b094a94877d',ramdisk_id='',reservation_id='r-03dq3w90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-448137458',owner_user_name='tempest-TestNetworkBasicOps-448137458-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:07:12Z,user_data=None,user_id='97d307f20103434babe2431661f5bbdb',uuid=078c0d57-6a60-4ffc-b196-332f00f1051b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.586 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6f834aa-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.587 189385 DEBUG nova.network.os_vif_util [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converting VIF {"id": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "address": "fa:16:3e:f4:ba:b8", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap12f4cfde-a9", "ovs_interfaceid": "12f4cfde-a94c-4c66-a066-f073dabfcb90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.588 189385 DEBUG nova.network.os_vif_util [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:ba:b8,bridge_name='br-int',has_traffic_filtering=True,id=12f4cfde-a94c-4c66-a066-f073dabfcb90,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12f4cfde-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.588 189385 DEBUG os_vif [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:ba:b8,bridge_name='br-int',has_traffic_filtering=True,id=12f4cfde-a94c-4c66-a066-f073dabfcb90,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12f4cfde-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.590 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.591 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12f4cfde-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.592 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.593 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.596 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.596 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6f834aa-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.596 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.597 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa6f834aa-d0, col_values=(('external_ids', {'iface-id': '702441f8-9440-4a38-a0f0-225d972b0155'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:07:55 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:55.597 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.597 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.601 189385 INFO os_vif [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:ba:b8,bridge_name='br-int',has_traffic_filtering=True,id=12f4cfde-a94c-4c66-a066-f073dabfcb90,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap12f4cfde-a9')
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.602 189385 INFO nova.virt.libvirt.driver [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Deleting instance files /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b_del
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.603 189385 INFO nova.virt.libvirt.driver [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Deletion of /var/lib/nova/instances/078c0d57-6a60-4ffc-b196-332f00f1051b_del complete
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.674 189385 DEBUG nova.compute.manager [req-647e0e8b-ff3a-47a5-9f58-5241fea55f16 req-9f117e8a-838b-4283-b662-84387e4418f3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received event network-vif-unplugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.675 189385 DEBUG oslo_concurrency.lockutils [req-647e0e8b-ff3a-47a5-9f58-5241fea55f16 req-9f117e8a-838b-4283-b662-84387e4418f3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.675 189385 DEBUG oslo_concurrency.lockutils [req-647e0e8b-ff3a-47a5-9f58-5241fea55f16 req-9f117e8a-838b-4283-b662-84387e4418f3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.676 189385 DEBUG oslo_concurrency.lockutils [req-647e0e8b-ff3a-47a5-9f58-5241fea55f16 req-9f117e8a-838b-4283-b662-84387e4418f3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.676 189385 DEBUG nova.compute.manager [req-647e0e8b-ff3a-47a5-9f58-5241fea55f16 req-9f117e8a-838b-4283-b662-84387e4418f3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] No waiting events found dispatching network-vif-unplugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.676 189385 DEBUG nova.compute.manager [req-647e0e8b-ff3a-47a5-9f58-5241fea55f16 req-9f117e8a-838b-4283-b662-84387e4418f3 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received event network-vif-unplugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.729 189385 INFO nova.compute.manager [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Took 0.47 seconds to destroy the instance on the hypervisor.
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.729 189385 DEBUG oslo.service.loopingcall [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.730 189385 DEBUG nova.compute.manager [-] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:07:55 compute-0 nova_compute[189381]: 2025-11-25 11:07:55.730 189385 DEBUG nova.network.neutron [-] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.487 189385 DEBUG nova.network.neutron [-] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.511 189385 INFO nova.compute.manager [-] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Took 1.78 seconds to deallocate network for instance.
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.559 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.559 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.704 189385 DEBUG nova.compute.provider_tree [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.717 189385 DEBUG nova.scheduler.client.report [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.746 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.779 189385 INFO nova.scheduler.client.report [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Deleted allocations for instance 078c0d57-6a60-4ffc-b196-332f00f1051b
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.857 189385 DEBUG nova.compute.manager [req-6bc1d5c2-6dc9-467b-b4f6-5661ba209d43 req-e9c2219d-dffa-48bb-8042-5087a5aa2cff d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received event network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.858 189385 DEBUG oslo_concurrency.lockutils [req-6bc1d5c2-6dc9-467b-b4f6-5661ba209d43 req-e9c2219d-dffa-48bb-8042-5087a5aa2cff d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.858 189385 DEBUG oslo_concurrency.lockutils [req-6bc1d5c2-6dc9-467b-b4f6-5661ba209d43 req-e9c2219d-dffa-48bb-8042-5087a5aa2cff d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.859 189385 DEBUG oslo_concurrency.lockutils [req-6bc1d5c2-6dc9-467b-b4f6-5661ba209d43 req-e9c2219d-dffa-48bb-8042-5087a5aa2cff d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.859 189385 DEBUG nova.compute.manager [req-6bc1d5c2-6dc9-467b-b4f6-5661ba209d43 req-e9c2219d-dffa-48bb-8042-5087a5aa2cff d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] No waiting events found dispatching network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.859 189385 WARNING nova.compute.manager [req-6bc1d5c2-6dc9-467b-b4f6-5661ba209d43 req-e9c2219d-dffa-48bb-8042-5087a5aa2cff d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received unexpected event network-vif-plugged-12f4cfde-a94c-4c66-a066-f073dabfcb90 for instance with vm_state deleted and task_state None.
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.860 189385 DEBUG nova.compute.manager [req-6bc1d5c2-6dc9-467b-b4f6-5661ba209d43 req-e9c2219d-dffa-48bb-8042-5087a5aa2cff d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Received event network-vif-deleted-12f4cfde-a94c-4c66-a066-f073dabfcb90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:07:57 compute-0 nova_compute[189381]: 2025-11-25 11:07:57.862 189385 DEBUG oslo_concurrency.lockutils [None req-3ff072d1-7563-4dba-9c35-9a889da818d9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "078c0d57-6a60-4ffc-b196-332f00f1051b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:58 compute-0 nova_compute[189381]: 2025-11-25 11:07:58.610 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:58 compute-0 podman[256911]: 2025-11-25 11:07:58.98162617 +0000 UTC m=+0.094967153 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6)
Nov 25 11:07:58 compute-0 podman[256912]: 2025-11-25 11:07:58.982380282 +0000 UTC m=+0.085394156 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:07:59 compute-0 podman[203557]: time="2025-11-25T11:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:07:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30756 "" "Go-http-client/1.1"
Nov 25 11:07:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5268 "" "Go-http-client/1.1"
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.855 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.857 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.858 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.858 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.858 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.860 189385 INFO nova.compute.manager [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Terminating instance
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.861 189385 DEBUG nova.compute.manager [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:07:59 compute-0 kernel: tape66646b4-49 (unregistering): left promiscuous mode
Nov 25 11:07:59 compute-0 NetworkManager[56317]: <info>  [1764068879.9006] device (tape66646b4-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.904 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:59 compute-0 ovn_controller[97779]: 2025-11-25T11:07:59Z|00191|binding|INFO|Releasing lport e66646b4-49f7-478f-a2c1-e76f91c0dcb5 from this chassis (sb_readonly=0)
Nov 25 11:07:59 compute-0 ovn_controller[97779]: 2025-11-25T11:07:59Z|00192|binding|INFO|Setting lport e66646b4-49f7-478f-a2c1-e76f91c0dcb5 down in Southbound
Nov 25 11:07:59 compute-0 ovn_controller[97779]: 2025-11-25T11:07:59Z|00193|binding|INFO|Removing iface tape66646b4-49 ovn-installed in OVS
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.910 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:59 compute-0 nova_compute[189381]: 2025-11-25 11:07:59.925 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:07:59 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 25 11:07:59 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 46.760s CPU time.
Nov 25 11:07:59 compute-0 systemd-machined[155706]: Machine qemu-13-instance-0000000c terminated.
Nov 25 11:07:59 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:59.967 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:ce:5c 10.100.0.5'], port_security=['fa:16:3e:05:ce:5c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '89069d3ee96a4fd493232b094a94877d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e62d1308-edba-4797-954c-6555434a8671', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=feead41c-bd30-4d7d-b182-8bed9968ffc7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=e66646b4-49f7-478f-a2c1-e76f91c0dcb5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:07:59 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:59.969 106634 INFO neutron.agent.ovn.metadata.agent [-] Port e66646b4-49f7-478f-a2c1-e76f91c0dcb5 in datapath a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 unbound from our chassis
Nov 25 11:07:59 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:59.970 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:07:59 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:59.971 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c611d1-c1ac-4fac-8242-bc440702499b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:07:59 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:07:59.973 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 namespace which is not needed anymore
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.091 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.112 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.139 189385 INFO nova.virt.libvirt.driver [-] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Instance destroyed successfully.
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.140 189385 DEBUG nova.objects.instance [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lazy-loading 'resources' on Instance uuid b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:08:00 compute-0 neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2[255739]: [NOTICE]   (255743) : haproxy version is 2.8.14-c23fe91
Nov 25 11:08:00 compute-0 neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2[255739]: [NOTICE]   (255743) : path to executable is /usr/sbin/haproxy
Nov 25 11:08:00 compute-0 neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2[255739]: [WARNING]  (255743) : Exiting Master process...
Nov 25 11:08:00 compute-0 neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2[255739]: [ALERT]    (255743) : Current worker (255745) exited with code 143 (Terminated)
Nov 25 11:08:00 compute-0 neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2[255739]: [WARNING]  (255743) : All workers exited. Exiting... (0)
Nov 25 11:08:00 compute-0 systemd[1]: libpod-691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523.scope: Deactivated successfully.
Nov 25 11:08:00 compute-0 podman[256977]: 2025-11-25 11:08:00.157106151 +0000 UTC m=+0.065152039 container died 691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.161 189385 DEBUG nova.virt.libvirt.vif [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:05:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-401290240',display_name='tempest-TestNetworkBasicOps-server-401290240',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-401290240',id=12,image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHA8R4q6qPFU+ALdVzgKo4U9D54rhiMYhyFh1DfoGFij9UC3wSOk8pBEA8MgYqf5zaKmFTI58V1qGOYP7Zgp5d4I8du77yh6rO6+SF28X0uZmieYLZNtgoLf/lManZdFug==',key_name='tempest-TestNetworkBasicOps-1314646098',keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:06:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='89069d3ee96a4fd493232b094a94877d',ramdisk_id='',reservation_id='r-gnoseubk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='b388f0fb-bd04-4296-928b-44c706e0493e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-448137458',owner_user_name='tempest-TestNetworkBasicOps-448137458-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:06:02Z,user_data=None,user_id='97d307f20103434babe2431661f5bbdb',uuid=b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.162 189385 DEBUG nova.network.os_vif_util [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converting VIF {"id": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "address": "fa:16:3e:05:ce:5c", "network": {"id": "a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2", "bridge": "br-int", "label": "tempest-network-smoke--1505779129", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "89069d3ee96a4fd493232b094a94877d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape66646b4-49", "ovs_interfaceid": "e66646b4-49f7-478f-a2c1-e76f91c0dcb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.164 189385 DEBUG nova.network.os_vif_util [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:ce:5c,bridge_name='br-int',has_traffic_filtering=True,id=e66646b4-49f7-478f-a2c1-e76f91c0dcb5,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66646b4-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.164 189385 DEBUG os_vif [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:ce:5c,bridge_name='br-int',has_traffic_filtering=True,id=e66646b4-49f7-478f-a2c1-e76f91c0dcb5,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66646b4-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
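
The unplug above goes through os-vif's small public surface (initialize/plug/unplug) with versioned objects. A minimal, hypothetical reconstruction of that call, with field values taken from the logged VIFOpenVSwitch repr (exact object fields vary by os-vif release):

    import os_vif
    from os_vif.objects import instance_info as osv_instance
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()  # loads the plug/unplug plugins ('ovs' here) via stevedore

    vif = osv_vif.VIFOpenVSwitch(
        id='e66646b4-49f7-478f-a2c1-e76f91c0dcb5',
        address='fa:16:3e:05:ce:5c',
        vif_name='tape66646b4-49',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        port_profile=osv_vif.VIFPortProfileOpenVSwitch(
            interface_id='e66646b4-49f7-478f-a2c1-e76f91c0dcb5'),
        network=osv_network.Network(id='a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2',
                                    bridge='br-int'))
    instance = osv_instance.InstanceInfo(
        uuid='b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f',
        name='tempest-TestNetworkBasicOps-server-401290240')

    os_vif.unplug(vif, instance)  # produces the "Unplugging vif ..." line above
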
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.165 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.166 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape66646b4-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
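
The DelPortCommand in that transaction is ovsdbapp's generic OVSDB port delete; the same operation can be issued directly outside nova. A sketch assuming a local ovsdb-server at the default unix socket (endpoint and timeout are illustrative):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed endpoint
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes this a no-op when the port is already gone,
    # mirroring DelPortCommand(..., if_exists=True) in the log.
    api.del_port('tape66646b4-49', bridge='br-int', if_exists=True).execute(
        check_error=True)
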
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.167 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.169 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
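
The recurring "[POLLIN] on fd N __log_wakeup" DEBUG lines are the OVS IDL event loop waking up when its ovsdb socket becomes readable. A stand-alone sketch of that wait using the ovs Python bindings (the fd number is whatever socket the IDL holds, 26 above):

    import select
    import ovs.poller

    def wait_for_db_activity(sock_fd, timeout_ms=5000):
        p = ovs.poller.Poller()
        p.fd_wait(sock_fd, select.POLLIN)  # wake when the socket is readable
        p.timer_wait(timeout_ms)           # or after the timeout, if sooner
        p.block()  # __log_wakeup emits the "[POLLIN] on fd ..." line here
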
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.172 189385 INFO os_vif [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:ce:5c,bridge_name='br-int',has_traffic_filtering=True,id=e66646b4-49f7-478f-a2c1-e76f91c0dcb5,network=Network(a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape66646b4-49')
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.172 189385 INFO nova.virt.libvirt.driver [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Deleting instance files /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f_del
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.173 189385 INFO nova.virt.libvirt.driver [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Deletion of /var/lib/nova/instances/b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f_del complete
Nov 25 11:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523-userdata-shm.mount: Deactivated successfully.
Nov 25 11:08:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-1370c934ebef9537e2b36ccfe10491be88aa1b8b19308df9823b5d7adfeb7cb5-merged.mount: Deactivated successfully.
Nov 25 11:08:00 compute-0 podman[256977]: 2025-11-25 11:08:00.203479055 +0000 UTC m=+0.111524923 container cleanup 691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:08:00 compute-0 systemd[1]: libpod-conmon-691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523.scope: Deactivated successfully.
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.266 189385 INFO nova.compute.manager [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Took 0.40 seconds to destroy the instance on the hypervisor.
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.266 189385 DEBUG oslo.service.loopingcall [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
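
The "Waiting for function ... _deallocate_network_with_retries to return" line is oslo.service's looping-call machinery: the body runs on a fixed interval until it raises LoopingCallDone, while the caller blocks on .wait(). A self-contained sketch of that pattern (the retry body here is a stand-in, not nova's):

    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _retry_body():
        attempts['n'] += 1
        if attempts['n'] < 3:
            return  # pretend a transient failure; run again next interval
        raise loopingcall.LoopingCallDone(retvalue=True)  # success: stop

    timer = loopingcall.FixedIntervalLoopingCall(_retry_body)
    result = timer.start(interval=0.1).wait()  # blocks, as in the DEBUG line
    print(result)  # True after the third attempt
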
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.267 189385 DEBUG nova.compute.manager [-] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.267 189385 DEBUG nova.network.neutron [-] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:08:00 compute-0 podman[257017]: 2025-11-25 11:08:00.289378424 +0000 UTC m=+0.053765499 container remove 691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.299 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[0b117435-21fc-4317-8782-4d77690fd1fe]: (4, ('Tue Nov 25 11:08:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 (691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523)\n691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523\nTue Nov 25 11:08:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 (691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523)\n691b31ddc922612ab07b11811142181fde04a7746e214d9b2ae01d28d4c75523\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.301 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[3a92f3fb-7b2e-4cd8-bb50-3b85bee4e226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.302 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6f834aa-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.304 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:00 compute-0 kernel: tapa6f834aa-d0: left promiscuous mode
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.310 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.320 189385 DEBUG nova.compute.manager [req-8f9702b6-cdde-47cc-a778-27f78e5ed4fe req-030fcea9-2778-4058-bfb7-e6f99cd339fa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-vif-unplugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.320 189385 DEBUG oslo_concurrency.lockutils [req-8f9702b6-cdde-47cc-a778-27f78e5ed4fe req-030fcea9-2778-4058-bfb7-e6f99cd339fa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.321 189385 DEBUG oslo_concurrency.lockutils [req-8f9702b6-cdde-47cc-a778-27f78e5ed4fe req-030fcea9-2778-4058-bfb7-e6f99cd339fa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.321 189385 DEBUG oslo_concurrency.lockutils [req-8f9702b6-cdde-47cc-a778-27f78e5ed4fe req-030fcea9-2778-4058-bfb7-e6f99cd339fa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.322 189385 DEBUG nova.compute.manager [req-8f9702b6-cdde-47cc-a778-27f78e5ed4fe req-030fcea9-2778-4058-bfb7-e6f99cd339fa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] No waiting events found dispatching network-vif-unplugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.322 189385 DEBUG nova.compute.manager [req-8f9702b6-cdde-47cc-a778-27f78e5ed4fe req-030fcea9-2778-4058-bfb7-e6f99cd339fa d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-vif-unplugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
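
The Acquiring/acquired/released trio around pop_instance_event comes from oslo.concurrency's synchronized decorator, whose wrapper ("inner" in the logged paths) logs each phase at DEBUG. A minimal example of the same per-instance events lock (the helper and event names are illustrative):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = 'b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f'

    @lockutils.synchronized('%s-events' % INSTANCE_UUID)
    def _pop_event(events, name):
        # the decorator's wrapper logs Acquiring/acquired/released at DEBUG
        return events.pop(name, None)

    print(_pop_event({'network-vif-unplugged': object()},
                     'network-vif-unplugged'))
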
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.324 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[19797ccb-a11b-41f4-a436-ca811abcc670]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:00 compute-0 nova_compute[189381]: 2025-11-25 11:08:00.343 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.364 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[715b96ab-7fd6-44b2-9c1d-cf8e94877989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.366 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7a2f18-88bd-4eb8-a3bf-d22577b8d6a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.380 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[6dee33fc-5f09-4254-af75-d18621f595b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 565735, 'reachable_time': 19723, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257030, 'error': None, 'target': 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.382 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:08:00 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:00.383 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[d9748c48-c7f1-4120-9664-92eae4119a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:00 compute-0 systemd[1]: run-netns-ovnmeta\x2da6f834aa\x2dd0fe\x2d4b8b\x2dac0c\x2d79f6dcda1eb2.mount: Deactivated successfully.
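
Teardown of the ovnmeta namespace is done by neutron's privileged helper; the large RTM_NEWLINK reply above is a link dump taken inside the namespace before removal. A hedged sketch of the equivalent direct calls, assuming pyroute2 and root privileges:

    from pyroute2 import NetNS, netns

    NS = 'ovnmeta-a6f834aa-d0fe-4b8b-ac0c-79f6dcda1eb2'

    with NetNS(NS) as ns:                # requires CAP_SYS_ADMIN
        for link in ns.get_links():      # the RTM_NEWLINK dump seen above
            print(link.get_attr('IFLA_IFNAME'), link['state'])

    netns.remove(NS)  # mirrors "Namespace ... deleted. remove_netns"
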
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.185 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.306 189385 DEBUG nova.network.neutron [-] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.325 189385 INFO nova.compute.manager [-] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Took 1.06 seconds to deallocate network for instance.
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.411 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.412 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:01 compute-0 openstack_network_exporter[205722]: ERROR   11:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:08:01 compute-0 openstack_network_exporter[205722]: ERROR   11:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:08:01 compute-0 openstack_network_exporter[205722]: ERROR   11:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:08:01 compute-0 openstack_network_exporter[205722]: ERROR   11:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:08:01 compute-0 openstack_network_exporter[205722]: ERROR   11:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.492 189385 DEBUG nova.compute.manager [req-16ba53ff-74f7-4f9a-9a0f-65d368bfb37a req-d5e296ae-28e4-4294-8995-109fe19cca2a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-vif-deleted-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.512 189385 DEBUG nova.compute.provider_tree [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.532 189385 DEBUG nova.scheduler.client.report [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
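
For reference, the schedulable capacity implied by that inventory follows placement's standard formula, capacity = (total - reserved) * allocation_ratio; checking it against the logged values:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
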
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.558 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.587 189385 INFO nova.scheduler.client.report [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Deleted allocations for instance b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f
Nov 25 11:08:01 compute-0 nova_compute[189381]: 2025-11-25 11:08:01.702 189385 DEBUG oslo_concurrency.lockutils [None req-591d1bff-e581-4d62-a359-94d3ebd0f8b9 97d307f20103434babe2431661f5bbdb 89069d3ee96a4fd493232b094a94877d - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:02 compute-0 nova_compute[189381]: 2025-11-25 11:08:02.417 189385 DEBUG nova.compute.manager [req-cad070dd-db45-4fee-9471-14bc96e1d6d1 req-39142a11-91e3-481a-8ae1-a84e2a25e1a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received event network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:08:02 compute-0 nova_compute[189381]: 2025-11-25 11:08:02.418 189385 DEBUG oslo_concurrency.lockutils [req-cad070dd-db45-4fee-9471-14bc96e1d6d1 req-39142a11-91e3-481a-8ae1-a84e2a25e1a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:02 compute-0 nova_compute[189381]: 2025-11-25 11:08:02.418 189385 DEBUG oslo_concurrency.lockutils [req-cad070dd-db45-4fee-9471-14bc96e1d6d1 req-39142a11-91e3-481a-8ae1-a84e2a25e1a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:02 compute-0 nova_compute[189381]: 2025-11-25 11:08:02.419 189385 DEBUG oslo_concurrency.lockutils [req-cad070dd-db45-4fee-9471-14bc96e1d6d1 req-39142a11-91e3-481a-8ae1-a84e2a25e1a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:02 compute-0 nova_compute[189381]: 2025-11-25 11:08:02.419 189385 DEBUG nova.compute.manager [req-cad070dd-db45-4fee-9471-14bc96e1d6d1 req-39142a11-91e3-481a-8ae1-a84e2a25e1a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] No waiting events found dispatching network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:08:02 compute-0 nova_compute[189381]: 2025-11-25 11:08:02.419 189385 WARNING nova.compute.manager [req-cad070dd-db45-4fee-9471-14bc96e1d6d1 req-39142a11-91e3-481a-8ae1-a84e2a25e1a4 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Received unexpected event network-vif-plugged-e66646b4-49f7-478f-a2c1-e76f91c0dcb5 for instance with vm_state deleted and task_state None.
Nov 25 11:08:02 compute-0 podman[257032]: 2025-11-25 11:08:02.974667985 +0000 UTC m=+0.090231776 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:08:03 compute-0 podman[257031]: 2025-11-25 11:08:03.004117168 +0000 UTC m=+0.120781291 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:08:03 compute-0 nova_compute[189381]: 2025-11-25 11:08:03.614 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:05 compute-0 nova_compute[189381]: 2025-11-25 11:08:05.170 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:06 compute-0 podman[257075]: 2025-11-25 11:08:06.937987787 +0000 UTC m=+0.056750635 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:08:08 compute-0 nova_compute[189381]: 2025-11-25 11:08:08.617 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:09 compute-0 nova_compute[189381]: 2025-11-25 11:08:09.282 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:10 compute-0 nova_compute[189381]: 2025-11-25 11:08:10.172 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:10 compute-0 nova_compute[189381]: 2025-11-25 11:08:10.550 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068875.5484698, 078c0d57-6a60-4ffc-b196-332f00f1051b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:08:10 compute-0 nova_compute[189381]: 2025-11-25 11:08:10.551 189385 INFO nova.compute.manager [-] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] VM Stopped (Lifecycle Event)
Nov 25 11:08:10 compute-0 nova_compute[189381]: 2025-11-25 11:08:10.596 189385 DEBUG nova.compute.manager [None req-e0699708-a1af-453e-8f77-87a8af96411a - - - - - -] [instance: 078c0d57-6a60-4ffc-b196-332f00f1051b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:08:11 compute-0 ovn_controller[97779]: 2025-11-25T11:08:11Z|00194|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:08:11 compute-0 nova_compute[189381]: 2025-11-25 11:08:11.970 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:12 compute-0 ovn_controller[97779]: 2025-11-25T11:08:12Z|00195|binding|INFO|Releasing lport 915e80eb-5def-4cf6-b65e-79eab93b7232 from this chassis (sb_readonly=0)
Nov 25 11:08:12 compute-0 nova_compute[189381]: 2025-11-25 11:08:12.267 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:13 compute-0 nova_compute[189381]: 2025-11-25 11:08:13.619 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:15 compute-0 nova_compute[189381]: 2025-11-25 11:08:15.136 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764068880.135093, b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:08:15 compute-0 nova_compute[189381]: 2025-11-25 11:08:15.138 189385 INFO nova.compute.manager [-] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] VM Stopped (Lifecycle Event)
Nov 25 11:08:15 compute-0 nova_compute[189381]: 2025-11-25 11:08:15.159 189385 DEBUG nova.compute.manager [None req-741fef1b-bb2a-42ca-a3ef-16ddbc8012bd - - - - - -] [instance: b2d67fe2-b96f-4ff6-ae4b-46caaaa8d25f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:08:15 compute-0 nova_compute[189381]: 2025-11-25 11:08:15.177 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:15 compute-0 podman[257099]: 2025-11-25 11:08:15.961165728 +0000 UTC m=+0.071209494 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:08:15 compute-0 podman[257098]: 2025-11-25 11:08:15.963031602 +0000 UTC m=+0.074377766 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 25 11:08:18 compute-0 nova_compute[189381]: 2025-11-25 11:08:18.621 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:18 compute-0 podman[257138]: 2025-11-25 11:08:18.963200537 +0000 UTC m=+0.072208664 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler, vcs-type=git, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, managed_by=edpm_ansible)
Nov 25 11:08:20 compute-0 nova_compute[189381]: 2025-11-25 11:08:20.180 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:23 compute-0 nova_compute[189381]: 2025-11-25 11:08:23.624 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.498 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.499 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.518 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.618 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.619 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.633 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.634 189385 INFO nova.compute.claims [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Claim successful on node compute-0.ctlplane.example.com
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.813 189385 DEBUG nova.compute.provider_tree [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.837 189385 DEBUG nova.scheduler.client.report [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.926 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.927 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.983 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 25 11:08:24 compute-0 nova_compute[189381]: 2025-11-25 11:08:24.985 189385 DEBUG nova.network.neutron [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.052 189385 INFO nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.098 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.183 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.283 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.284 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.285 189385 INFO nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Creating image(s)
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.286 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.286 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.287 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.305 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.369 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.371 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.372 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.384 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.444 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.446 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe,backing_fmt=raw /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.499 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe,backing_fmt=raw /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.501 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "3fef73d7277cb1405047adb7eff0e99ae990dcbe" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.502 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.563 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3fef73d7277cb1405047adb7eff0e99ae990dcbe --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.564 189385 DEBUG nova.virt.disk.api [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Checking if we can resize image /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.565 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.624 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.634 189385 DEBUG nova.virt.disk.api [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Cannot resize image /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
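The "Cannot resize image ... to a smaller size" message is the expected no-op rather than a failure: the overlay was just created with exactly the 1 GiB virtual size the m1.nano flavor requests, and nova only ever grows disks. A sketch of that grow-only comparison (helper names are illustrative, not nova's):

    # Sketch: grow-only resize check in the spirit of
    # nova.virt.disk.api.can_resize_image.
    import json
    import subprocess

    def virtual_size(path):
        out = subprocess.check_output(
            ['qemu-img', 'info', '--force-share', '--output=json', path])
        return json.loads(out)['virtual-size']

    def can_resize(path, requested_bytes):
        # equal or smaller requests are skipped, as in the log above
        return requested_bytes > virtual_size(path)

    print(can_resize(
        '/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk',
        1073741824))  # False: 1 GiB == 1 GiB, nothing to grow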
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.636 189385 DEBUG nova.objects.instance [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lazy-loading 'migration_context' on Instance uuid dba9274f-6164-41cc-8f4b-870c1cb3f67c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.651 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.652 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Ensure instance console log exists: /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.652 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.653 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:25 compute-0 nova_compute[189381]: 2025-11-25 11:08:25.653 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:25 compute-0 podman[257174]: 2025-11-25 11:08:25.980869485 +0000 UTC m=+0.094179720 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:08:26 compute-0 nova_compute[189381]: 2025-11-25 11:08:26.088 189385 DEBUG nova.policy [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
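The failed policy check above is informational rather than an error: nova evaluates network:attach_external_network with the requester's own credentials (roles reader and member here) and, when denied, simply carries on without the ability to attach external networks. The same style of check with oslo.policy looks roughly like this (the role:admin default is an assumption for illustration; the real default lives in nova's policy registry):

    # Sketch: evaluate a policy rule against request credentials, in the
    # style of nova.policy.authorize. The rule default is assumed here.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    creds = {'user_id': '95acdf386c1e42c8a6da1f7b9603054f',
             'project_id': 'd057fe4d034a4f13b6e08dc8083cad5b',
             'roles': ['reader', 'member']}

    print(enforcer.enforce('network:attach_external_network',
                           target={}, creds=creds))  # False, as in the log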
Nov 25 11:08:28 compute-0 nova_compute[189381]: 2025-11-25 11:08:28.278 189385 DEBUG nova.network.neutron [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Successfully created port: 00b30981-5989-421b-9886-4a0d1020874c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 25 11:08:28 compute-0 nova_compute[189381]: 2025-11-25 11:08:28.627 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:29 compute-0 podman[203557]: time="2025-11-25T11:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:08:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:08:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
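The two GET requests above are a client hitting podman's libpod REST API over its local Unix socket (the service logs each request access-log style, with "@" in place of a remote address). The same query can be reproduced with only the standard library; the socket path below is the usual rootful default and is an assumption, since the log does not show it:

    # Sketch: query podman's libpod REST API over its Unix socket using
    # only the stdlib. Socket path is an assumption (rootful default).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    for c in json.loads(conn.getresponse().read()):
        print(c['Names'], c['State'])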
Nov 25 11:08:29 compute-0 podman[257193]: 2025-11-25 11:08:29.97325362 +0000 UTC m=+0.082394818 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 25 11:08:29 compute-0 podman[257194]: 2025-11-25 11:08:29.9856789 +0000 UTC m=+0.095239930 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 11:08:30 compute-0 nova_compute[189381]: 2025-11-25 11:08:30.187 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:31 compute-0 nova_compute[189381]: 2025-11-25 11:08:31.243 189385 DEBUG nova.network.neutron [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Successfully updated port: 00b30981-5989-421b-9886-4a0d1020874c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 25 11:08:31 compute-0 nova_compute[189381]: 2025-11-25 11:08:31.264 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:08:31 compute-0 nova_compute[189381]: 2025-11-25 11:08:31.264 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquired lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:08:31 compute-0 nova_compute[189381]: 2025-11-25 11:08:31.264 189385 DEBUG nova.network.neutron [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 25 11:08:31 compute-0 nova_compute[189381]: 2025-11-25 11:08:31.401 189385 DEBUG nova.compute.manager [req-a4c936b7-f97b-478b-adaa-bdc21f4923e0 req-be68886a-28f7-444a-87a0-2d7b9d0521e7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received event network-changed-00b30981-5989-421b-9886-4a0d1020874c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:08:31 compute-0 nova_compute[189381]: 2025-11-25 11:08:31.402 189385 DEBUG nova.compute.manager [req-a4c936b7-f97b-478b-adaa-bdc21f4923e0 req-be68886a-28f7-444a-87a0-2d7b9d0521e7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Refreshing instance network info cache due to event network-changed-00b30981-5989-421b-9886-4a0d1020874c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 25 11:08:31 compute-0 nova_compute[189381]: 2025-11-25 11:08:31.402 189385 DEBUG oslo_concurrency.lockutils [req-a4c936b7-f97b-478b-adaa-bdc21f4923e0 req-be68886a-28f7-444a-87a0-2d7b9d0521e7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:08:31 compute-0 openstack_network_exporter[205722]: ERROR   11:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:08:31 compute-0 openstack_network_exporter[205722]: ERROR   11:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:08:31 compute-0 openstack_network_exporter[205722]: ERROR   11:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:08:31 compute-0 openstack_network_exporter[205722]: ERROR   11:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:08:31 compute-0 openstack_network_exporter[205722]: ERROR   11:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:08:31 compute-0 nova_compute[189381]: 2025-11-25 11:08:31.546 189385 DEBUG nova.network.neutron [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 25 11:08:32 compute-0 nova_compute[189381]: 2025-11-25 11:08:32.024 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:33 compute-0 nova_compute[189381]: 2025-11-25 11:08:33.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:33 compute-0 nova_compute[189381]: 2025-11-25 11:08:33.629 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:33 compute-0 podman[257237]: 2025-11-25 11:08:33.956386638 +0000 UTC m=+0.070150664 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 25 11:08:34 compute-0 podman[257236]: 2025-11-25 11:08:34.02271934 +0000 UTC m=+0.140630936 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.498 189385 DEBUG nova.network.neutron [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updating instance_info_cache with network_info: [{"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.518 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Releasing lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.519 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Instance network_info: |[{"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
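The network_info blob repeated in these lines is nova's serialized view of the port: a single OVS-type VIF bound by the ovn driver, carrying the MAC, the tap device name, MTU 1442 (a tunneled tenant network), and the fixed IP nested under the subnet tree. Extracting the essentials is a plain dictionary walk; the literal below is trimmed from the log to just the fields the walk touches:

    # Sketch: pull addressing details out of a nova network_info structure
    # (trimmed from the log above).
    network_info = [{
        "id": "00b30981-5989-421b-9886-4a0d1020874c",
        "address": "fa:16:3e:93:2c:2e",
        "devname": "tap00b30981-59",
        "network": {
            "bridge": "br-int",
            "meta": {"mtu": 1442},
            "subnets": [{
                "cidr": "10.100.0.0/16",
                "gateway": {"address": "10.100.0.1"},
                "ips": [{"address": "10.100.0.181", "type": "fixed"}],
            }],
        },
    }]

    for vif in network_info:
        net = vif["network"]
        fixed = [ip["address"]
                 for subnet in net["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], fixed, net["meta"]["mtu"])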
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.520 189385 DEBUG oslo_concurrency.lockutils [req-a4c936b7-f97b-478b-adaa-bdc21f4923e0 req-be68886a-28f7-444a-87a0-2d7b9d0521e7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquired lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.520 189385 DEBUG nova.network.neutron [req-a4c936b7-f97b-478b-adaa-bdc21f4923e0 req-be68886a-28f7-444a-87a0-2d7b9d0521e7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Refreshing network info cache for port 00b30981-5989-421b-9886-4a0d1020874c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.525 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Start _get_guest_xml network_info=[{"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T11:04:01Z,direct_url=<?>,disk_format='qcow2',id=62ab6b08-ec10-4838-aa81-24150af36537,min_disk=0,min_ram=0,name='tempest-scenario-img--502157881',owner='d057fe4d034a4f13b6e08dc8083cad5b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T11:04:03Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'guest_format': None, 'device_name': '/dev/vda', 'encryption_format': None, 'disk_bus': 'virtio', 'size': 0, 'device_type': 'disk', 'encrypted': False, 'boot_index': 0, 'encryption_options': None, 'image_id': '62ab6b08-ec10-4838-aa81-24150af36537'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.536 189385 WARNING nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.546 189385 DEBUG nova.virt.libvirt.host [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.547 189385 DEBUG nova.virt.libvirt.host [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.552 189385 DEBUG nova.virt.libvirt.host [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.553 189385 DEBUG nova.virt.libvirt.host [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
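The four lines above are nova's host capability probe: no cpu controller turns up on cgroups v1, so the driver falls back to cgroups v2, where the unified hierarchy lists its available controllers in a single file. A sketch of the v2 half of that probe:

    # Sketch: detect the cgroups v2 "cpu" controller by reading the unified
    # hierarchy's controller list, as the log describes.
    CGROUP_V2_CONTROLLERS = '/sys/fs/cgroup/cgroup.controllers'

    def has_cgroupsv2_cpu_controller():
        try:
            with open(CGROUP_V2_CONTROLLERS) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a unified-hierarchy (v2) host

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log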
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.553 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.554 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-25T10:59:12Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b7c0626e-febc-4083-b621-6f5ee0740a18',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-25T11:04:01Z,direct_url=<?>,disk_format='qcow2',id=62ab6b08-ec10-4838-aa81-24150af36537,min_disk=0,min_ram=0,name='tempest-scenario-img--502157881',owner='d057fe4d034a4f13b6e08dc8083cad5b',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-25T11:04:03Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.555 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.555 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.556 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.556 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.556 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.557 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.557 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.558 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.558 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.559 189385 DEBUG nova.virt.hardware [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
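The topology walk above is pure arithmetic: with no flavor or image constraints the preferences stay at 0:0:0, the limits default to 65536 per dimension, and nova enumerates every (sockets, cores, threads) combination whose product equals the vCPU count, which for a single vCPU collapses to the lone candidate 1:1:1. A simplified sketch of that enumeration (the real nova.virt.hardware version also orders candidates by the preferred topology):

    # Sketch: enumerate valid CPU topologies for a vCPU count, in the spirit
    # of nova.virt.hardware._get_possible_cpu_topologies (simplified).
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # no dimension can exceed the vCPU count, so bound the search space
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log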
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.562 189385 DEBUG nova.virt.libvirt.vif [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx',id=15,image_ref='62ab6b08-ec10-4838-aa81-24150af36537',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f33016ec-000f-44cf-b7cc-2122723ba143'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d057fe4d034a4f13b6e08dc8083cad5b',ramdisk_id='',reservation_id='r-fc0lq6tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='62ab6b08-ec10-4838-aa81-24150af36537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1327093183',owner_user_name='tempest-PrometheusGabbiTest-1327093183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:08:25Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='95acdf386c1e42c8a6da1f7b9603054f',uuid=dba9274f-6164-41cc-8f4b-870c1cb3f67c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.563 189385 DEBUG nova.network.os_vif_util [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converting VIF {"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.564 189385 DEBUG nova.network.os_vif_util [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:2c:2e,bridge_name='br-int',has_traffic_filtering=True,id=00b30981-5989-421b-9886-4a0d1020874c,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b30981-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.565 189385 DEBUG nova.objects.instance [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lazy-loading 'pci_devices' on Instance uuid dba9274f-6164-41cc-8f4b-870c1cb3f67c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.576 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] End _get_guest_xml xml=<domain type="kvm">
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <uuid>dba9274f-6164-41cc-8f4b-870c1cb3f67c</uuid>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <name>instance-0000000f</name>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <memory>131072</memory>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <vcpu>1</vcpu>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <metadata>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <nova:name>te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx</nova:name>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <nova:creationTime>2025-11-25 11:08:34</nova:creationTime>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <nova:flavor name="m1.nano">
Nov 25 11:08:34 compute-0 nova_compute[189381]:         <nova:memory>128</nova:memory>
Nov 25 11:08:34 compute-0 nova_compute[189381]:         <nova:disk>1</nova:disk>
Nov 25 11:08:34 compute-0 nova_compute[189381]:         <nova:swap>0</nova:swap>
Nov 25 11:08:34 compute-0 nova_compute[189381]:         <nova:ephemeral>0</nova:ephemeral>
Nov 25 11:08:34 compute-0 nova_compute[189381]:         <nova:vcpus>1</nova:vcpus>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       </nova:flavor>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <nova:owner>
Nov 25 11:08:34 compute-0 nova_compute[189381]:         <nova:user uuid="95acdf386c1e42c8a6da1f7b9603054f">tempest-PrometheusGabbiTest-1327093183-project-member</nova:user>
Nov 25 11:08:34 compute-0 nova_compute[189381]:         <nova:project uuid="d057fe4d034a4f13b6e08dc8083cad5b">tempest-PrometheusGabbiTest-1327093183</nova:project>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       </nova:owner>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <nova:root type="image" uuid="62ab6b08-ec10-4838-aa81-24150af36537"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <nova:ports>
Nov 25 11:08:34 compute-0 nova_compute[189381]:         <nova:port uuid="00b30981-5989-421b-9886-4a0d1020874c">
Nov 25 11:08:34 compute-0 nova_compute[189381]:           <nova:ip type="fixed" address="10.100.0.181" ipVersion="4"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:         </nova:port>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       </nova:ports>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </nova:instance>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   </metadata>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <sysinfo type="smbios">
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <system>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <entry name="manufacturer">RDO</entry>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <entry name="product">OpenStack Compute</entry>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <entry name="serial">dba9274f-6164-41cc-8f4b-870c1cb3f67c</entry>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <entry name="uuid">dba9274f-6164-41cc-8f4b-870c1cb3f67c</entry>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <entry name="family">Virtual Machine</entry>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </system>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   </sysinfo>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <os>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <boot dev="hd"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <smbios mode="sysinfo"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   </os>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <features>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <acpi/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <apic/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <vmcoreinfo/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   </features>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <clock offset="utc">
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <timer name="pit" tickpolicy="delay"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <timer name="hpet" present="no"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   </clock>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <cpu mode="host-model" match="exact">
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <topology sockets="1" cores="1" threads="1"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   </cpu>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   <devices>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <disk type="file" device="disk">
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <driver name="qemu" type="qcow2" cache="none"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <target dev="vda" bus="virtio"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <disk type="file" device="cdrom">
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <driver name="qemu" type="raw" cache="none"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <source file="/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.config"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <target dev="sda" bus="sata"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </disk>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <interface type="ethernet">
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <mac address="fa:16:3e:93:2c:2e"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <driver name="vhost" rx_queue_size="512"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <mtu size="1442"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <target dev="tap00b30981-59"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </interface>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <serial type="pty">
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <log file="/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/console.log" append="off"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </serial>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <video>
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <model type="virtio"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </video>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <input type="tablet" bus="usb"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <rng model="virtio">
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <backend model="random">/dev/urandom</backend>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </rng>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="pci" model="pcie-root-port"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <controller type="usb" index="0"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     <memballoon model="virtio">
Nov 25 11:08:34 compute-0 nova_compute[189381]:       <stats period="10"/>
Nov 25 11:08:34 compute-0 nova_compute[189381]:     </memballoon>
Nov 25 11:08:34 compute-0 nova_compute[189381]:   </devices>
Nov 25 11:08:34 compute-0 nova_compute[189381]: </domain>
Nov 25 11:08:34 compute-0 nova_compute[189381]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
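With the domain XML rendered (q35 machine type, host-model CPU, the qcow2 overlay as a virtio disk, the config-drive ISO on SATA, the tap interface at MTU 1442, and the virtio RNG requested by the flavor's hw_rng:allowed='True' extra spec), the driver hands it to libvirt. A minimal sketch of that handoff with the libvirt Python bindings (XML abbreviated to a few elements; nova's real spawn path passes more devices and flags):

    # Sketch: define and boot a domain from XML with libvirt-python, the
    # step that follows _get_guest_xml. XML abbreviated for illustration.
    import libvirt

    xml = """<domain type='kvm'>
      <name>instance-0000000f</name>
      <uuid>dba9274f-6164-41cc-8f4b-870c1cb3f67c</uuid>
      <memory>131072</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64' machine='q35'>hvm</type><boot dev='hd'/></os>
    </domain>"""

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persistent definition
        dom.create()               # boot the guest
    finally:
        conn.close()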
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.585 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Preparing to wait for external event network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.585 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.586 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.586 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.587 189385 DEBUG nova.virt.libvirt.vif [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-25T11:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx',id=15,image_ref='62ab6b08-ec10-4838-aa81-24150af36537',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f33016ec-000f-44cf-b7cc-2122723ba143'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d057fe4d034a4f13b6e08dc8083cad5b',ramdisk_id='',reservation_id='r-fc0lq6tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='62ab6b08-ec10-4838-aa81-24150af36537',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1327093183',owner_user_name='tempest-PrometheusGabbiTest-1327093183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-25T11:08:25Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='95acdf386c1e42c8a6da1f7b9603054f',uuid=dba9274f-6164-41cc-8f4b-870c1cb3f67c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.588 189385 DEBUG nova.network.os_vif_util [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converting VIF {"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.589 189385 DEBUG nova.network.os_vif_util [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:93:2c:2e,bridge_name='br-int',has_traffic_filtering=True,id=00b30981-5989-421b-9886-4a0d1020874c,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b30981-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.590 189385 DEBUG os_vif [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2c:2e,bridge_name='br-int',has_traffic_filtering=True,id=00b30981-5989-421b-9886-4a0d1020874c,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b30981-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.591 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.591 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.592 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.595 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.595 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00b30981-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.596 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00b30981-59, col_values=(('external_ids', {'iface-id': '00b30981-5989-421b-9886-4a0d1020874c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:93:2c:2e', 'vm-uuid': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
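
Annotation: the AddPortCommand/DbSetCommand pair above is a single ovsdbapp transaction against the local Open vSwitch database. A hedged sketch of the equivalent client-side calls; the socket path and timeout are assumptions for a stock compute host, and the port/interface values are copied from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same two commands the log shows, committed atomically.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap00b30981-59', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap00b30981-59',
            ('external_ids', {
                'iface-id': '00b30981-5989-421b-9886-4a0d1020874c',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:93:2c:2e',
                'vm-uuid': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c'})))
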
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.598 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:34 compute-0 NetworkManager[56317]: <info>  [1764068914.5993] manager: (tap00b30981-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.601 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.606 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.607 189385 INFO os_vif [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:93:2c:2e,bridge_name='br-int',has_traffic_filtering=True,id=00b30981-5989-421b-9886-4a0d1020874c,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b30981-59')
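
Annotation: "Successfully plugged vif" closes the os_vif.plug() call that began at "Plugging vif" above. A minimal re-creation with the os_vif library, populating only the fields visible in the logged VIFOpenVSwitch repr; a sketch assuming a host with the 'ovs' plugin installed, not nova's exact call site:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch, VIFPortProfileOpenVSwitch

    os_vif.initialize()  # loads the 'ovs' plugin entry point

    vif = VIFOpenVSwitch(
        id='00b30981-5989-421b-9886-4a0d1020874c',
        address='fa:16:3e:93:2c:2e',
        vif_name='tap00b30981-59',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=Network(id='a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5',
                        bridge='br-int'),
        port_profile=VIFPortProfileOpenVSwitch(
            interface_id='00b30981-5989-421b-9886-4a0d1020874c'))

    inst = InstanceInfo(uuid='dba9274f-6164-41cc-8f4b-870c1cb3f67c',
                        name='instance-0000000f',
                        project_id='d057fe4d034a4f13b6e08dc8083cad5b')
    os_vif.plug(vif, inst)  # drives the OVSDB transactions logged above
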
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.701 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.702 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.703 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] No VIF found with MAC fa:16:3e:93:2c:2e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 25 11:08:34 compute-0 nova_compute[189381]: 2025-11-25 11:08:34.703 189385 INFO nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Using config drive
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.087 189385 INFO nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Creating config drive at /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.config
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.093 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv859xlvh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.219 189385 DEBUG oslo_concurrency.processutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv859xlvh" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
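
Annotation: processutils logs the argv space-joined, so the publisher string above is really one argument. A hedged re-creation of the config-drive build with the same oslo helper (paths and arguments copied from the log; /tmp/tmpv859xlvh was a temporary directory holding the staged metadata tree):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/'
              'dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',        # guests locate the drive by this label
        '/tmp/tmpv859xlvh')      # staged metadata tree from the log
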
Nov 25 11:08:35 compute-0 kernel: tap00b30981-59: entered promiscuous mode
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.296 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.303 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:35 compute-0 ovn_controller[97779]: 2025-11-25T11:08:35Z|00196|binding|INFO|Claiming lport 00b30981-5989-421b-9886-4a0d1020874c for this chassis.
Nov 25 11:08:35 compute-0 ovn_controller[97779]: 2025-11-25T11:08:35Z|00197|binding|INFO|00b30981-5989-421b-9886-4a0d1020874c: Claiming fa:16:3e:93:2c:2e 10.100.0.181
Nov 25 11:08:35 compute-0 NetworkManager[56317]: <info>  [1764068915.3072] manager: (tap00b30981-59): new Tun device (/org/freedesktop/NetworkManager/Devices/79)
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.320 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:35 compute-0 ovn_controller[97779]: 2025-11-25T11:08:35Z|00198|binding|INFO|Setting lport 00b30981-5989-421b-9886-4a0d1020874c ovn-installed in OVS
Nov 25 11:08:35 compute-0 ovn_controller[97779]: 2025-11-25T11:08:35Z|00199|binding|INFO|Setting lport 00b30981-5989-421b-9886-4a0d1020874c up in Southbound
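
Annotation: ovn-controller claims the lport because the Interface's external_ids:iface-id written during the plug matches the logical_port of a Southbound Port_Binding whose requested-chassis is this host. A diagnostic sketch of that correlation using the standard OVS/OVN CLIs via subprocess (names copied from the log):

    import subprocess

    # iface-id on the local OVS interface (set during the VIF plug)...
    iface_id = subprocess.check_output(
        ['ovs-vsctl', 'get', 'Interface', 'tap00b30981-59',
         'external_ids:iface-id'], text=True).strip().strip('"')

    # ...must equal the logical_port of the Southbound binding being claimed.
    print(subprocess.check_output(
        ['ovn-sbctl', 'list', 'Port_Binding', iface_id], text=True))
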
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.324 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:2c:2e 10.100.0.181'], port_security=['fa:16:3e:93:2c:2e 10.100.0.181'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.181/16', 'neutron:device_id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6dd922d1-432e-41c0-9438-975e4d0bc760', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da371dea-a01c-4170-8065-7d1b11a4ac95, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=00b30981-5989-421b-9886-4a0d1020874c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.326 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 00b30981-5989-421b-9886-4a0d1020874c in datapath a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 bound to our chassis
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.328 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5
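
Annotation: "Provisioning metadata for network ..." means the agent is setting up the per-network ovnmeta namespace and proxy that answer on the link-local metadata address. Once that completes, the standard endpoint is reachable from inside the guest; a minimal guest-side check (the URL path is the stock OpenStack metadata API):

    import urllib.request

    URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.read().decode())   # instance uuid, name, keys, ...
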
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.329 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.347 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c2388d-ad79-41a4-9625-80f2650bfd03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:35 compute-0 systemd-udevd[257298]: Network interface NamePolicy= disabled on kernel command line.
Nov 25 11:08:35 compute-0 systemd-machined[155706]: New machine qemu-16-instance-0000000f.
Nov 25 11:08:35 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
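
Annotation: machine qemu-16-instance-0000000f corresponds to libvirt domain instance-0000000f (nova instance id 15 rendered in hex, 0xf, matching id=15 in the instance dump above). A hedged read-only check with libvirt-python:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000f')
    print(dom.UUIDString())   # dba9274f-6164-41cc-8f4b-870c1cb3f67c
    print(dom.state())        # [state, reason]; 3 == VIR_DOMAIN_PAUSED while
                              # the guest waits for the vif-plugged event
    conn.close()
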
Nov 25 11:08:35 compute-0 NetworkManager[56317]: <info>  [1764068915.3707] device (tap00b30981-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 25 11:08:35 compute-0 NetworkManager[56317]: <info>  [1764068915.3716] device (tap00b30981-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.379 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[3f16b4b3-bd8f-46bb-a0e7-ed73ebbe5513]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.383 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e26431-c9eb-411a-8e30-731b56b38dc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.407 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3d8b6b-e5f5-4d1f-92bf-e4fa775dc876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.424 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[ba95d0d8-6aac-4c53-a26b-694f72970d77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa82a38fb-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:c9:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559003, 'reachable_time': 43940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257305, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.440 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[4d94e487-0ee3-4097-80f6-fa4a362efc34]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa82a38fb-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559013, 'tstamp': 559013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257310, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa82a38fb-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559017, 'tstamp': 559017}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257310, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
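
Annotation: the two privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta namespace; they show the tapa82a38fb-81 veth holding 10.100.0.2/16 plus the metadata address 169.254.169.254/32. A hedged equivalent, run as root on the compute host:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5')
    try:
        for msg in ns.get_addr():                 # RTM_NEWADDR dump
            attrs = dict(msg['attrs'])
            print(attrs.get('IFA_LABEL'), attrs.get('IFA_ADDRESS'))
    finally:
        ns.close()
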
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.442 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa82a38fb-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.444 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.445 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.448 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa82a38fb-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.449 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.449 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa82a38fb-80, col_values=(('external_ids', {'iface-id': '915e80eb-5def-4cf6-b65e-79eab93b7232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:08:35 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:35.450 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.640 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068915.6394024, dba9274f-6164-41cc-8f4b-870c1cb3f67c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.641 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] VM Started (Lifecycle Event)
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.676 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.682 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068915.6407962, dba9274f-6164-41cc-8f4b-870c1cb3f67c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.683 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] VM Paused (Lifecycle Event)
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.704 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.710 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:08:35 compute-0 nova_compute[189381]: 2025-11-25 11:08:35.725 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] During sync_power_state the instance has a pending task (spawning). Skip.
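
Annotation: the numeric states in the sync message decode via nova.compute.power_state; a plain mapping here so the sketch runs without nova installed:

    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    # DB power_state 0, VM power_state 3: the DB still says NOSTATE while
    # libvirt reports PAUSED, because the guest is started paused until the
    # network-vif-plugged event below lets nova resume it.
    print(POWER_STATE[0], '->', POWER_STATE[3])
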
Nov 25 11:08:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:36.073 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:36.075 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:36.077 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.196 189385 DEBUG nova.compute.manager [req-695d82fd-bf95-4de9-84cf-cbc8230f1543 req-ee3f881d-fdf7-4958-aa9f-b23de23198da d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received event network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.197 189385 DEBUG oslo_concurrency.lockutils [req-695d82fd-bf95-4de9-84cf-cbc8230f1543 req-ee3f881d-fdf7-4958-aa9f-b23de23198da d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.198 189385 DEBUG oslo_concurrency.lockutils [req-695d82fd-bf95-4de9-84cf-cbc8230f1543 req-ee3f881d-fdf7-4958-aa9f-b23de23198da d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.198 189385 DEBUG oslo_concurrency.lockutils [req-695d82fd-bf95-4de9-84cf-cbc8230f1543 req-ee3f881d-fdf7-4958-aa9f-b23de23198da d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.199 189385 DEBUG nova.compute.manager [req-695d82fd-bf95-4de9-84cf-cbc8230f1543 req-ee3f881d-fdf7-4958-aa9f-b23de23198da d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Processing event network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.200 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
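
Annotation: prepare_for_instance_event / pop_instance_event implement a small broker: the build thread registers a waiter before plugging the VIF, and Neutron's external event (relayed through nova-api, the req-695d82fd... context above) pops it. A minimal sketch of the pattern with threading; the names here are illustrative, not nova's:

    import threading

    _events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(instance_uuid, name):
        return _events.setdefault((instance_uuid, name), threading.Event())

    def pop(instance_uuid, name):
        ev = _events.pop((instance_uuid, name), None)
        if ev is not None:
            ev.set()            # wakes the waiting build thread
        return ev

    uuid = 'dba9274f-6164-41cc-8f4b-870c1cb3f67c'
    waiter = prepare(uuid, 'network-vif-plugged')   # before plugging
    pop(uuid, 'network-vif-plugged')                # Neutron's notification
    print(waiter.wait(timeout=1))                   # True: completed in ~0s

A second copy of the same notification arrives at 11:08:38 below, after the instance is already active and no waiter is registered, which is exactly the "Received unexpected event" warning logged there.
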
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.221 189385 DEBUG nova.virt.driver [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] Emitting event <LifecycleEvent: 1764068916.2039857, dba9274f-6164-41cc-8f4b-870c1cb3f67c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.221 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] VM Resumed (Lifecycle Event)
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.223 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.229 189385 INFO nova.virt.libvirt.driver [-] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Instance spawned successfully.
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.229 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.242 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.250 189385 DEBUG nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.254 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.254 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.255 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.255 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.256 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.256 189385 DEBUG nova.virt.libvirt.driver [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.285 189385 INFO nova.compute.manager [None req-63f9efab-c9b3-4e20-a8af-68b88cbc3600 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.328 189385 INFO nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Took 11.04 seconds to spawn the instance on the hypervisor.
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.328 189385 DEBUG nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.396 189385 INFO nova.compute.manager [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Took 11.81 seconds to build instance.
Nov 25 11:08:36 compute-0 nova_compute[189381]: 2025-11-25 11:08:36.412 189385 DEBUG oslo_concurrency.lockutils [None req-ba5ee865-8d6b-4ab7-afc6-79c07af8a8fd 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.913s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:37 compute-0 nova_compute[189381]: 2025-11-25 11:08:37.126 189385 DEBUG nova.network.neutron [req-a4c936b7-f97b-478b-adaa-bdc21f4923e0 req-be68886a-28f7-444a-87a0-2d7b9d0521e7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updated VIF entry in instance network info cache for port 00b30981-5989-421b-9886-4a0d1020874c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 25 11:08:37 compute-0 nova_compute[189381]: 2025-11-25 11:08:37.126 189385 DEBUG nova.network.neutron [req-a4c936b7-f97b-478b-adaa-bdc21f4923e0 req-be68886a-28f7-444a-87a0-2d7b9d0521e7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updating instance_info_cache with network_info: [{"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:08:37 compute-0 nova_compute[189381]: 2025-11-25 11:08:37.142 189385 DEBUG oslo_concurrency.lockutils [req-a4c936b7-f97b-478b-adaa-bdc21f4923e0 req-be68886a-28f7-444a-87a0-2d7b9d0521e7 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Releasing lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:08:37 compute-0 podman[257320]: 2025-11-25 11:08:37.959992049 +0000 UTC m=+0.068143346 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.047 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.048 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.134 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.199 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.200 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.269 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.281 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.304 189385 DEBUG nova.compute.manager [req-41f0ac2a-2261-4792-a096-a36cc29d5409 req-8d71d8d4-ef9c-48a1-88c3-96e048aeaea0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received event network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.305 189385 DEBUG oslo_concurrency.lockutils [req-41f0ac2a-2261-4792-a096-a36cc29d5409 req-8d71d8d4-ef9c-48a1-88c3-96e048aeaea0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.306 189385 DEBUG oslo_concurrency.lockutils [req-41f0ac2a-2261-4792-a096-a36cc29d5409 req-8d71d8d4-ef9c-48a1-88c3-96e048aeaea0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.306 189385 DEBUG oslo_concurrency.lockutils [req-41f0ac2a-2261-4792-a096-a36cc29d5409 req-8d71d8d4-ef9c-48a1-88c3-96e048aeaea0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.306 189385 DEBUG nova.compute.manager [req-41f0ac2a-2261-4792-a096-a36cc29d5409 req-8d71d8d4-ef9c-48a1-88c3-96e048aeaea0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] No waiting events found dispatching network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.307 189385 WARNING nova.compute.manager [req-41f0ac2a-2261-4792-a096-a36cc29d5409 req-8d71d8d4-ef9c-48a1-88c3-96e048aeaea0 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received unexpected event network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c for instance with vm_state active and task_state None.
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.346 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.347 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.408 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
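
Annotation: the disk audit wraps qemu-img in oslo's prlimit helper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a malformed image cannot wedge the resource tracker. A hedged equivalent call:

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk',
        '--force-share', '--output=json', prlimit=limits)
    info = json.loads(out)
    print(info['format'], info['virtual-size'], info.get('actual-size'))
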
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.632 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.785 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.787 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5156MB free_disk=72.09992218017578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.788 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.788 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.888 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.889 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.889 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:08:38 compute-0 nova_compute[189381]: 2025-11-25 11:08:38.889 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.025 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.052 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.052 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.072 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.094 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.156 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.171 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.234 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.234 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
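The "compute_resources" lock released above (held 0.446s) is oslo.concurrency's named-lock pattern; the hold time is reported on release. A minimal sketch of the same primitive, assuming oslo.concurrency is installed (the function name is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Serialized against any other caller holding the same named
        # lock in this process; the hold time spans the function body.
        ...

    update_available_resource()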
Nov 25 11:08:39 compute-0 nova_compute[189381]: 2025-11-25 11:08:39.599 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:40 compute-0 nova_compute[189381]: 2025-11-25 11:08:40.235 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:40 compute-0 nova_compute[189381]: 2025-11-25 11:08:40.236 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:08:40 compute-0 nova_compute[189381]: 2025-11-25 11:08:40.258 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 11:08:41 compute-0 nova_compute[189381]: 2025-11-25 11:08:41.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:42 compute-0 nova_compute[189381]: 2025-11-25 11:08:42.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:43 compute-0 nova_compute[189381]: 2025-11-25 11:08:43.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:43 compute-0 nova_compute[189381]: 2025-11-25 11:08:43.635 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:44 compute-0 nova_compute[189381]: 2025-11-25 11:08:44.602 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:46 compute-0 nova_compute[189381]: 2025-11-25 11:08:46.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:46 compute-0 nova_compute[189381]: 2025-11-25 11:08:46.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
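The periodic task entries above come from oslo.service's periodic_task framework, and _reclaim_queued_deletes shows the usual pattern of returning early when a config knob disables the work. A sketch of that pattern (the spacing and stand-in config value are illustrative):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            reclaim_instance_interval = 0  # stand-in for CONF.reclaim_instance_interval
            if reclaim_instance_interval <= 0:
                return  # mirrors the "skipping..." message above

Running it for real requires instantiating PeriodicTasks with an oslo.config object and driving run_periodic_tasks() from a service loop.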
Nov 25 11:08:46 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:46.553 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:08:46 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:46.554 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying chassis table update for 1 second run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:08:46 compute-0 nova_compute[189381]: 2025-11-25 11:08:46.556 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:46 compute-0 podman[257358]: 2025-11-25 11:08:46.963095907 +0000 UTC m=+0.073680786 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 25 11:08:46 compute-0 podman[257359]: 2025-11-25 11:08:46.966950329 +0000 UTC m=+0.074029636 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 25 11:08:47 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:08:47.558 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
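The transaction above is ovsdbapp's DbSetCommand stamping the processed nb_cfg ('21') into external_ids on the agent's Chassis_Private row. With a connected ovsdbapp idl API object (connection setup omitted, so this is only a sketch), the equivalent high-level call looks like:

    # `api` is assumed to be an ovsdbapp ovs_idl backend connected to the
    # OVN southbound database; the record UUID is copied from the log above.
    api.db_set(
        'Chassis_Private',
        '3fcb3423-a4d5-4f72-950c-307893e4a985',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),
    ).execute(check_error=True)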
Nov 25 11:08:48 compute-0 nova_compute[189381]: 2025-11-25 11:08:48.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:08:48 compute-0 nova_compute[189381]: 2025-11-25 11:08:48.637 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:49 compute-0 nova_compute[189381]: 2025-11-25 11:08:49.606 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:49 compute-0 podman[257397]: 2025-11-25 11:08:49.954819757 +0000 UTC m=+0.072833051 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=edpm, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 25 11:08:53 compute-0 nova_compute[189381]: 2025-11-25 11:08:53.640 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:54 compute-0 nova_compute[189381]: 2025-11-25 11:08:54.609 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:56 compute-0 podman[257417]: 2025-11-25 11:08:56.966122341 +0000 UTC m=+0.077464706 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:08:58 compute-0 nova_compute[189381]: 2025-11-25 11:08:58.640 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:59 compute-0 nova_compute[189381]: 2025-11-25 11:08:59.611 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:08:59 compute-0 podman[203557]: time="2025-11-25T11:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:08:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:08:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
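The two GETs above are clients hitting the Podman REST API over its unix socket (journald renders the socket peer as "@"). A stdlib-only sketch of the same containers/json query; the socket path is an assumption for a rootful podman system service:

    import http.client
    import json
    import socket

    SOCK = "/run/podman/podman.sock"  # assumed service socket path

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client.HTTPConnection over an AF_UNIX socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection(SOCK)
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")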
Nov 25 11:09:00 compute-0 podman[257436]: 2025-11-25 11:09:00.963954254 +0000 UTC m=+0.073737807 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:09:00 compute-0 podman[257435]: 2025-11-25 11:09:00.965253342 +0000 UTC m=+0.084100748 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Nov 25 11:09:01 compute-0 openstack_network_exporter[205722]: ERROR   11:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:09:01 compute-0 openstack_network_exporter[205722]: ERROR   11:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:09:01 compute-0 openstack_network_exporter[205722]: ERROR   11:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:09:01 compute-0 openstack_network_exporter[205722]: ERROR   11:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:09:01 compute-0 openstack_network_exporter[205722]: ERROR   11:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.342 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; polling can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.343 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
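The two messages above mean the [pollsters] source has more pollsters than worker threads, so polls queue behind a single worker. That is plain concurrent.futures behavior, illustrated:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster run
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # 1 worker, 3 tasks
        list(pool.map(poll, ['a', 'b', 'c']))
    print(f"{time.monotonic() - start:.1f}s")  # ~0.3s: the tasks ran one by one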
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.358 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'name': 'te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.361 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance dba9274f-6164-41cc-8f4b-870c1cb3f67c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 25 11:09:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:03.362 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/dba9274f-6164-41cc-8f4b-870c1cb3f67c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}a1f72e6be5435435c50078726d2cfcc555ee337db55aab4cb68901d5b9361ea2" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 25 11:09:03 compute-0 nova_compute[189381]: 2025-11-25 11:09:03.642 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.505 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Tue, 25 Nov 2025 11:09:03 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-8459c39c-689f-4753-936b-23607717ed03 x-openstack-request-id: req-8459c39c-689f-4753-936b-23607717ed03 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.505 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "dba9274f-6164-41cc-8f4b-870c1cb3f67c", "name": "te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx", "status": "ACTIVE", "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "user_id": "95acdf386c1e42c8a6da1f7b9603054f", "metadata": {"metering.server_group": "f33016ec-000f-44cf-b7cc-2122723ba143"}, "hostId": "70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162", "image": {"id": "62ab6b08-ec10-4838-aa81-24150af36537", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/62ab6b08-ec10-4838-aa81-24150af36537"}]}, "flavor": {"id": "b7c0626e-febc-4083-b621-6f5ee0740a18", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/b7c0626e-febc-4083-b621-6f5ee0740a18"}]}, "created": "2025-11-25T11:08:23Z", "updated": "2025-11-25T11:08:36Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.181", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:93:2c:2e"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/dba9274f-6164-41cc-8f4b-870c1cb3f67c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/dba9274f-6164-41cc-8f4b-870c1cb3f67c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-25T11:08:36.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.506 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/dba9274f-6164-41cc-8f4b-870c1cb3f67c used request id req-8459c39c-689f-4753-936b-23607717ed03 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
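The REQ/RESP pair above is python-novaclient logging its call as a reproducible curl command (the token is masked as a SHA256 digest). The same lookup from Python via keystoneauth1; the auth URL and credentials below are placeholders, not values from this deployment:

    from keystoneauth1 import identity, session
    from novaclient import client as nova_client

    auth = identity.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder
        username='ceilometer', password='...',                       # placeholders
        project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)
    nova = nova_client.Client('2.1', session=sess)

    server = nova.servers.get('dba9274f-6164-41cc-8f4b-870c1cb3f67c')
    print(server.name, server.status)  # fields echoed in the RESP BODY above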
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.507 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'name': 'te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.508 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.509 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:09:04.509025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.514 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.519 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for dba9274f-6164-41cc-8f4b-870c1cb3f67c / tap00b30981-59 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.520 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.521 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.522 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:09:04.522419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.523 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
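The "No delta meter predecessor" line above, followed by a delta of 0, is the expected first-poll behavior: a delta meter derived from cumulative counters has nothing to subtract on its first observation. The bookkeeping is essentially a dict of previous values; a sketch (not ceilometer's actual cache structure):

    # Delta from cumulative counters: the first observation has no predecessor.
    _prev = {}

    def delta(key, cumulative):
        last = _prev.get(key)
        _prev[key] = cumulative
        if last is None:
            return 0  # no predecessor: report 0, not the raw cumulative value
        return max(0, cumulative - last)  # clamp in case the counter reset

    vnic = ('dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'tap00b30981-59')
    print(delta(vnic, 1620))  # 0 -- first poll
    print(delta(vnic, 1820))  # 200 -- later polls report the increment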
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.525 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.525 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:09:04.526242) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.550 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/memory.usage volume: 42.73828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.579 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.579 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance dba9274f-6164-41cc-8f4b-870c1cb3f67c: ceilometer.compute.pollsters.NoVolumeException
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
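memory.usage is read from libvirt's per-domain memory statistics, which are often not yet populated for a guest that booted seconds earlier (this instance launched at 11:08:36), hence the NoVolumeException for dba9274f... while the older instance reports 42.7 MB. The same data can be checked directly with libvirt-python; the domain name is taken from the RESP BODY above:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000000f')
    # Early in boot this dict may lack keys such as 'rss' or 'available'.
    print(dom.memoryStats())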
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.580 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.581 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.582 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx>]
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-25T11:09:04.581272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
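The PollsterPermanentError above is ceilometer's signal that this meter can never succeed for these resources with the current inspector (libvirt exposes cumulative byte counters, not precomputed *.rate values), so the manager blacklists the resources instead of retrying every cycle. A sketch of the plugin-side contract; the class body is illustrative, not ceilometer's implementation:

    from ceilometer.polling import plugin_base

    class RatePollsterSketch(plugin_base.PollsterBase):
        def get_samples(self, manager, cache, resources):
            # The inspector cannot supply rate data: tell the manager to
            # permanently stop polling these resources for this pollster.
            raise plugin_base.PollsterPermanentError(resources)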
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.584 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes volume: 1436 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.584 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:09:04.584005) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.586 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.586 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.587 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:09:04.586406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:09:04.589102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.589 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.590 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.591 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.592 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.592 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/cpu volume: 246190000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.592 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/cpu volume: 27850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:09:04.592102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
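The cpu meter is cumulative: 246190000000 is total guest CPU time in nanoseconds since the instance started, not an instantaneous load figure. Utilisation therefore has to be derived downstream from two consecutive samples; a worked example, with the follow-up value and the 30 s interval invented for illustration:

```python
# Deriving utilisation from two cumulative "cpu" samples, as a
# downstream rate transformation would.

def cpu_util_percent(cpu_ns_prev, cpu_ns_now, interval_s, vcpus):
    """Average CPU utilisation over the polling interval, in percent."""
    delta_ns = cpu_ns_now - cpu_ns_prev
    return 100.0 * delta_ns / (interval_s * vcpus * 1e9)


# An instance that accumulates 3 s of CPU time across a 30 s interval
# on 1 vCPU averaged 10% utilisation:
print(cpu_util_percent(246_190_000_000, 249_190_000_000, 30, 1))  # 10.0
```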
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.594 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:09:04.594716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.595 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.595 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.596 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:09:04.596840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.612 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.612 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 nova_compute[189381]: 2025-11-25 11:09:04.615 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.638 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.639 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
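disk.device.capacity, like the other disk.device.* meters, yields one sample per attached block device, which is why each instance logs two volumes here: 1073741824 bytes is exactly 1 GiB (plausibly the root disk), and 509952 bytes is 498 KiB (plausibly a config drive; this is inferred from the sizes, not stated in the log). For reference:

```python
# Quick conversion of the two logged per-device capacities.
for vol in (1073741824, 509952):
    print(f"{vol} B = {vol / 1024:.0f} KiB = {vol / 1024**3:.6f} GiB")
# 1073741824 B = 1048576 KiB = 1.000000 GiB
# 509952 B = 498 KiB = 0.000475 GiB
```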
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:09:04.641201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.681 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 30145536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.682 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.750 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.750 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.751 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.752 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.752 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.752 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 1600810847 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.753 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 68341060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:09:04.752505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.754 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 1154894140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.754 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 2134601 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.755 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.756 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.756 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 1092 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:09:04.756254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.756 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.757 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.757 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.758 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.758 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.759 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.759 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.759 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:09:04.759160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.759 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.760 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.760 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.761 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.761 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.761 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.762 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.763 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:09:04.762052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.763 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.763 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.764 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.765 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.765 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.765 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.765 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.766 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.766 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 10464762727 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.767 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.767 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.767 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:09:04.766149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.767 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.768 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.768 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.769 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.769 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.770 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.770 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.771 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:09:04.769907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.771 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
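The power.state volume carries the libvirt virDomainState value, so the 1 reported for both instances means they are running. The full enum, on the assumption that ceilometer passes the libvirt state through unchanged (consistent with the logged value for two active instances):

```python
# virDomainState values as defined by libvirt.
LIBVIRT_DOMAIN_STATE = {
    0: "nostate",      # no state reported
    1: "running",
    2: "blocked",      # blocked on a resource
    3: "paused",
    4: "shutdown",     # in the process of shutting down
    5: "shutoff",      # powered off
    6: "crashed",
    7: "pmsuspended",  # suspended by guest power management
}

print(LIBVIRT_DOMAIN_STATE[1])  # running
```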
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.771 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.771 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.772 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.772 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.772 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.773 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:09:04.772897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.773 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 313 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.773 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.774 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.774 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.775 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.775 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.776 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.776 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.777 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.777 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.778 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:09:04.776604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.778 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.779 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.779 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.779 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.781 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.781 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.781 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:09:04.779245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.782 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.783 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.783 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.783 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.784 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx>]
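These two entries show why the *.rate meters keep failing: the LibvirtInspector does not supply the pre-computed rate statistics these pollsters need, so IncomingBytesRatePollster raises PollsterPermanentError and the manager blacklists the listed instance for that pollster and source rather than retrying it every interval. A simplified sketch of that contract follows; the exception class name comes from the log (ceilometer.polling.plugin_base), while the manager-side handling below is illustrative, not ceilometer's actual code.

```python
# Simplified sketch of the permanent-error blacklisting contract.
class PollsterPermanentError(Exception):
    """Raised with the resources a pollster can never sample."""

    def __init__(self, resources):
        super().__init__(resources)
        self.failed_resources = resources


def poll_once(pollster, resources, blacklist):
    # Skip resources this pollster has already failed permanently on,
    # so they are not retried on every polling interval.
    candidates = [r for r in resources if r not in blacklist]
    try:
        return list(pollster.get_samples(candidates))
    except PollsterPermanentError as err:
        # e.g. a *.rate pollster on the libvirt inspector: the data
        # source will never supply the statistic, so stop asking.
        blacklist.extend(err.failed_resources)
        return []
```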
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.784 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.785 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.784 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-25T11:09:04.783436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.785 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.785 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.786 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.786 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:09:04.785614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.786 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.787 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.787 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.788 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets volume: 11 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:09:04.787817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.788 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.788 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.789 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.789 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.789 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.790 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.790 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:09:04.790044) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.790 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.791 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.791 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.791 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.792 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.792 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:09:04.791997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.792 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.792 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.793 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.793 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.794 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.794 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.794 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:09:04.794452) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.794 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.795 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.795 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:09:04 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:09:04.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
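
The run of "Finished processing pollster [...]" lines closes out one polling interval: every meter in the task goes through discovery, sampling, and publication before the task returns. A condensed sketch of that per-interval loop, simplified from the flow logged above; the real logic lives in ceilometer/polling/manager.py and these signatures are illustrative:

    # Sketch: one polling interval, simplified from the sequence logged above.
    def run_polling_task(pollsters, discover, publish):
        for name, pollster in pollsters.items():
            resources = discover("local_instances")    # discovery per pollster
            samples = pollster.get_samples(resources)  # e.g. "volume: 11" above
            publish(samples)
            print(f"Finished processing pollster [{name}].")
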
Nov 25 11:09:04 compute-0 podman[257479]: 2025-11-25 11:09:04.974348972 +0000 UTC m=+0.078618509 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:09:05 compute-0 podman[257478]: 2025-11-25 11:09:05.011365384 +0000 UTC m=+0.119988467 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
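
The two podman lines are periodic health_status events: podman's healthcheck timer executes the 'test' command from each container's config_data ('/openstack/healthcheck') inside the container and records the result. The same check can be invoked by hand; a sketch using the podman CLI from Python, with the container name taken from the log:

    import subprocess

    # Sketch: run the same healthcheck podman's timer runs for the "multipathd"
    # container; exit code 0 means "healthy", matching health_status above.
    def healthcheck(container="multipathd"):
        r = subprocess.run(["podman", "healthcheck", "run", container])
        return "healthy" if r.returncode == 0 else "unhealthy"

    if __name__ == "__main__":
        print(healthcheck())
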
Nov 25 11:09:06 compute-0 nova_compute[189381]: 2025-11-25 11:09:06.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:08 compute-0 nova_compute[189381]: 2025-11-25 11:09:08.644 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:08 compute-0 podman[257520]: 2025-11-25 11:09:08.949509388 +0000 UTC m=+0.063224463 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:09:09 compute-0 nova_compute[189381]: 2025-11-25 11:09:09.618 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:13 compute-0 nova_compute[189381]: 2025-11-25 11:09:13.647 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:13 compute-0 ovn_controller[97779]: 2025-11-25T11:09:13Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:93:2c:2e 10.100.0.181
Nov 25 11:09:13 compute-0 ovn_controller[97779]: 2025-11-25T11:09:13Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:93:2c:2e 10.100.0.181
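
ovn-controller answers DHCP natively in its pinctrl thread, so no dnsmasq is involved: the DHCPOFFER/DHCPACK pair above binds 10.100.0.181 to fa:16:3e:93:2c:2e. A small parser for these lines; the regex is mine, matched to the format shown:

    import re

    # Sketch: pull (message type, MAC, IP) out of ovn-controller pinctrl lines
    # like the DHCPOFFER/DHCPACK pair above.
    PAT = re.compile(r"\|pinctrl\([^)]*\)\|INFO\|(DHCPOFFER|DHCPACK)\s+"
                     r"([0-9a-f:]{17})\s+(\d+\.\d+\.\d+\.\d+)")

    line = ("2025-11-25T11:09:13Z|00026|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPACK fa:16:3e:93:2c:2e 10.100.0.181")
    m = PAT.search(line)
    if m:
        msg, mac, ip = m.groups()
        print(msg, mac, ip)   # DHCPACK fa:16:3e:93:2c:2e 10.100.0.181
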
Nov 25 11:09:14 compute-0 nova_compute[189381]: 2025-11-25 11:09:14.622 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:17 compute-0 podman[257557]: 2025-11-25 11:09:17.964954346 +0000 UTC m=+0.077958841 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:09:17 compute-0 podman[257558]: 2025-11-25 11:09:17.985652595 +0000 UTC m=+0.097204057 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 11:09:18 compute-0 nova_compute[189381]: 2025-11-25 11:09:18.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:18 compute-0 nova_compute[189381]: 2025-11-25 11:09:18.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 11:09:18 compute-0 nova_compute[189381]: 2025-11-25 11:09:18.650 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:19 compute-0 nova_compute[189381]: 2025-11-25 11:09:19.625 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:20 compute-0 podman[257593]: 2025-11-25 11:09:20.966843759 +0000 UTC m=+0.080890675 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git)
Nov 25 11:09:22 compute-0 ovn_controller[97779]: 2025-11-25T11:09:22Z|00200|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Nov 25 11:09:23 compute-0 nova_compute[189381]: 2025-11-25 11:09:23.652 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:24 compute-0 nova_compute[189381]: 2025-11-25 11:09:24.630 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:27 compute-0 podman[257613]: 2025-11-25 11:09:27.972240031 +0000 UTC m=+0.085144158 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:09:28 compute-0 nova_compute[189381]: 2025-11-25 11:09:28.655 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:29 compute-0 nova_compute[189381]: 2025-11-25 11:09:29.634 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:29 compute-0 podman[203557]: time="2025-11-25T11:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:09:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:09:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
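
Those two GET lines are podman_exporter scraping podman's libpod REST API over the UNIX socket mounted into its container (/run/podman/podman.sock, per its config_data above). A standard-library sketch of the same "list containers" call; the socket path and API version are taken from the log:

    import http.client
    import json
    import socket

    # Sketch: call the libpod "list containers" endpoint over the UNIX socket,
    # mirroring the GET /v4.9.3/libpod/containers/json?all=true request logged.
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(len(containers), "containers")
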
Nov 25 11:09:31 compute-0 openstack_network_exporter[205722]: ERROR   11:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:09:31 compute-0 openstack_network_exporter[205722]: ERROR   11:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:09:31 compute-0 openstack_network_exporter[205722]: ERROR   11:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:09:31 compute-0 openstack_network_exporter[205722]: ERROR   11:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
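
The ERROR burst from openstack_network_exporter is expected on a compute node: the exporter also probes ovn-northd and ovsdb-server, but only ovn-controller runs here, so their appctl control sockets do not exist, and the dpif-netdev PMD queries find no userspace datapath to report on. A sketch of the precondition the exporter is tripping over; the rundir paths and socket naming are conventional assumptions:

    import glob

    # Sketch: check for appctl control sockets before calling a daemon, the
    # condition behind "no control socket files found for ovn-northd" above.
    def has_ctl_socket(daemon, rundirs=("/var/run/ovn", "/run/openvswitch")):
        for d in rundirs:
            if glob.glob(f"{d}/{daemon}.*.ctl"):
                return True
        return False

    for daemon in ("ovn-northd", "ovsdb-server", "ovn-controller"):
        print(daemon, has_ctl_socket(daemon))
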
Nov 25 11:09:31 compute-0 podman[257632]: 2025-11-25 11:09:31.964970437 +0000 UTC m=+0.067175007 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:09:31 compute-0 podman[257631]: 2025-11-25 11:09:31.97163404 +0000 UTC m=+0.077794525 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350)
Nov 25 11:09:33 compute-0 nova_compute[189381]: 2025-11-25 11:09:33.659 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:34 compute-0 nova_compute[189381]: 2025-11-25 11:09:34.051 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:34 compute-0 nova_compute[189381]: 2025-11-25 11:09:34.051 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:34 compute-0 nova_compute[189381]: 2025-11-25 11:09:34.636 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:35 compute-0 podman[257673]: 2025-11-25 11:09:35.985115607 +0000 UTC m=+0.094596852 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 25 11:09:36 compute-0 podman[257672]: 2025-11-25 11:09:36.003744937 +0000 UTC m=+0.117844606 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 25 11:09:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:09:36.074 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:09:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:09:36.074 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:09:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:09:36.075 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
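
The Acquiring/acquired/"released" triplet around _check_child_processes is oslo.concurrency's standard lock tracing; agent code usually gets it from a decorator rather than explicit acquire/release calls. A minimal equivalent using the real oslo_concurrency API (the function body is illustrative; the DEBUG lines appear when debug logging is enabled):

    from oslo_concurrency import lockutils

    # Sketch: the synchronized decorator produces the Acquiring/acquired/
    # "released" DEBUG lines seen above around the critical section.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # illustrative body: the real monitor respawns dead child processes
        pass

    check_child_processes()
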
Nov 25 11:09:38 compute-0 nova_compute[189381]: 2025-11-25 11:09:38.660 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:39 compute-0 nova_compute[189381]: 2025-11-25 11:09:39.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:39 compute-0 nova_compute[189381]: 2025-11-25 11:09:39.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:09:39 compute-0 nova_compute[189381]: 2025-11-25 11:09:39.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:09:39 compute-0 nova_compute[189381]: 2025-11-25 11:09:39.639 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:39 compute-0 podman[257716]: 2025-11-25 11:09:39.932474467 +0000 UTC m=+0.051311298 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 11:09:40 compute-0 nova_compute[189381]: 2025-11-25 11:09:40.031 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:09:40 compute-0 nova_compute[189381]: 2025-11-25 11:09:40.032 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:09:40 compute-0 nova_compute[189381]: 2025-11-25 11:09:40.032 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:09:40 compute-0 nova_compute[189381]: 2025-11-25 11:09:40.032 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:09:43 compute-0 nova_compute[189381]: 2025-11-25 11:09:43.662 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.261 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
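
The cache refresh above rewrites the full network_info blob for the instance; the tap device name, MAC, and fixed IPs nova needs are all inside it. A sketch pulling those fields out of that structure, with the data abbreviated from the logged entry:

    import json

    # Sketch: extract devname, MAC, and fixed IPs from the network_info entry
    # that _heal_instance_info_cache just wrote for instance 18a30ced-....
    network_info = json.loads("""[{
      "id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7",
      "address": "fa:16:3e:fd:bc:05",
      "devname": "tap6ed45132-26",
      "network": {"subnets": [{"cidr": "10.100.0.0/16",
        "ips": [{"address": "10.100.2.10", "type": "fixed"}]}]}
    }]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], ips)
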
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.291 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.292 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.292 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.293 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.293 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.317 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.317 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.318 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.318 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.418 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.500 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.502 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.563 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.577 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.642 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.644 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.645 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:09:44 compute-0 nova_compute[189381]: 2025-11-25 11:09:44.715 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
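
Each qemu-img probe above is wrapped in oslo.concurrency's prlimit helper so a stuck or malformed image cannot pin the agent: the child process gets a 1 GiB address-space cap (--as=1073741824) and 30 s of CPU time (--cpu=30). The same call from Python using the real processutils API; the disk path is copied from the log:

    from oslo_concurrency import processutils

    # Sketch: reproduce the logged qemu-img invocation with the same resource
    # caps nova applies (1 GiB address space, 30 s CPU time).
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk",
        "--force-share", "--output=json",
        prlimit=limits)
    print(out)
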
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.038 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.040 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5036MB free_disk=72.07188415527344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.040 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.041 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.279 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.280 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.281 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.281 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.334 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.354 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.356 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.357 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.357 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.358 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 11:09:45 compute-0 nova_compute[189381]: 2025-11-25 11:09:45.374 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 11:09:47 compute-0 nova_compute[189381]: 2025-11-25 11:09:47.103 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:47 compute-0 nova_compute[189381]: 2025-11-25 11:09:47.104 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:47 compute-0 nova_compute[189381]: 2025-11-25 11:09:47.105 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:09:48 compute-0 nova_compute[189381]: 2025-11-25 11:09:48.664 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:48 compute-0 podman[257752]: 2025-11-25 11:09:48.955796812 +0000 UTC m=+0.070575966 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:09:48 compute-0 podman[257751]: 2025-11-25 11:09:48.979588972 +0000 UTC m=+0.098416613 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 25 11:09:49 compute-0 nova_compute[189381]: 2025-11-25 11:09:49.646 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:50 compute-0 nova_compute[189381]: 2025-11-25 11:09:50.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:51 compute-0 podman[257789]: 2025-11-25 11:09:51.962653951 +0000 UTC m=+0.080868315 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, version=9.4, io.openshift.expose-services=, architecture=x86_64, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container)
Nov 25 11:09:53 compute-0 nova_compute[189381]: 2025-11-25 11:09:53.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:09:53 compute-0 nova_compute[189381]: 2025-11-25 11:09:53.666 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:54 compute-0 nova_compute[189381]: 2025-11-25 11:09:54.649 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:58 compute-0 nova_compute[189381]: 2025-11-25 11:09:58.668 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:58 compute-0 podman[257808]: 2025-11-25 11:09:58.949640939 +0000 UTC m=+0.067309221 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:09:59 compute-0 nova_compute[189381]: 2025-11-25 11:09:59.659 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:09:59 compute-0 podman[203557]: time="2025-11-25T11:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:09:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:09:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Nov 25 11:10:01 compute-0 openstack_network_exporter[205722]: ERROR   11:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:10:01 compute-0 openstack_network_exporter[205722]: ERROR   11:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:10:01 compute-0 openstack_network_exporter[205722]: ERROR   11:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:10:01 compute-0 openstack_network_exporter[205722]: ERROR   11:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:10:01 compute-0 openstack_network_exporter[205722]: ERROR   11:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:10:02 compute-0 podman[257824]: 2025-11-25 11:10:02.961915831 +0000 UTC m=+0.070971817 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, distribution-scope=public, version=9.6, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Nov 25 11:10:02 compute-0 podman[257825]: 2025-11-25 11:10:02.975196136 +0000 UTC m=+0.080639138 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:10:03 compute-0 nova_compute[189381]: 2025-11-25 11:10:03.669 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:04 compute-0 nova_compute[189381]: 2025-11-25 11:10:04.660 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:06 compute-0 podman[257867]: 2025-11-25 11:10:06.945242005 +0000 UTC m=+0.059666070 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118)
Nov 25 11:10:07 compute-0 podman[257866]: 2025-11-25 11:10:07.013107371 +0000 UTC m=+0.130968006 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:10:08 compute-0 nova_compute[189381]: 2025-11-25 11:10:08.673 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:09 compute-0 nova_compute[189381]: 2025-11-25 11:10:09.664 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:10 compute-0 podman[257910]: 2025-11-25 11:10:10.943971444 +0000 UTC m=+0.054793249 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:10:12 compute-0 sshd-session[257934]: Connection closed by authenticating user root 171.244.51.45 port 38470 [preauth]
Nov 25 11:10:13 compute-0 nova_compute[189381]: 2025-11-25 11:10:13.676 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:14 compute-0 nova_compute[189381]: 2025-11-25 11:10:14.667 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:18 compute-0 nova_compute[189381]: 2025-11-25 11:10:18.679 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:19 compute-0 nova_compute[189381]: 2025-11-25 11:10:19.671 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:19 compute-0 podman[257942]: 2025-11-25 11:10:19.954070255 +0000 UTC m=+0.065588341 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 11:10:19 compute-0 podman[257943]: 2025-11-25 11:10:19.973139128 +0000 UTC m=+0.081667848 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 25 11:10:22 compute-0 podman[257981]: 2025-11-25 11:10:22.961179391 +0000 UTC m=+0.070339010 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc.)
Nov 25 11:10:23 compute-0 nova_compute[189381]: 2025-11-25 11:10:23.682 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:24 compute-0 nova_compute[189381]: 2025-11-25 11:10:24.674 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:28 compute-0 nova_compute[189381]: 2025-11-25 11:10:28.683 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:29 compute-0 nova_compute[189381]: 2025-11-25 11:10:29.678 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:29 compute-0 podman[203557]: time="2025-11-25T11:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:10:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:10:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Nov 25 11:10:29 compute-0 podman[258002]: 2025-11-25 11:10:29.961530607 +0000 UTC m=+0.068107414 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent)
Nov 25 11:10:31 compute-0 openstack_network_exporter[205722]: ERROR   11:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:10:31 compute-0 openstack_network_exporter[205722]: ERROR   11:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:10:31 compute-0 openstack_network_exporter[205722]: ERROR   11:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:10:31 compute-0 openstack_network_exporter[205722]: ERROR   11:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:10:31 compute-0 openstack_network_exporter[205722]: ERROR   11:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:10:33 compute-0 nova_compute[189381]: 2025-11-25 11:10:33.686 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:33 compute-0 podman[258024]: 2025-11-25 11:10:33.958722932 +0000 UTC m=+0.065282912 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:10:33 compute-0 podman[258023]: 2025-11-25 11:10:33.989915276 +0000 UTC m=+0.099935117 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 11:10:34 compute-0 nova_compute[189381]: 2025-11-25 11:10:34.681 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:35 compute-0 nova_compute[189381]: 2025-11-25 11:10:35.030 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:36 compute-0 nova_compute[189381]: 2025-11-25 11:10:36.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:10:36.075 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:10:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:10:36.075 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:10:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:10:36.076 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:10:37 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 25 11:10:37 compute-0 podman[258066]: 2025-11-25 11:10:37.307344704 +0000 UTC m=+0.078130115 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 25 11:10:37 compute-0 podman[258065]: 2025-11-25 11:10:37.347215219 +0000 UTC m=+0.127504376 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Nov 25 11:10:38 compute-0 nova_compute[189381]: 2025-11-25 11:10:38.689 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:39 compute-0 nova_compute[189381]: 2025-11-25 11:10:39.684 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:40 compute-0 nova_compute[189381]: 2025-11-25 11:10:40.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:40 compute-0 nova_compute[189381]: 2025-11-25 11:10:40.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:10:41 compute-0 nova_compute[189381]: 2025-11-25 11:10:41.090 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:10:41 compute-0 nova_compute[189381]: 2025-11-25 11:10:41.091 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:10:41 compute-0 nova_compute[189381]: 2025-11-25 11:10:41.092 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:10:41 compute-0 podman[258110]: 2025-11-25 11:10:41.931572428 +0000 UTC m=+0.050094142 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:10:43 compute-0 nova_compute[189381]: 2025-11-25 11:10:43.692 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.305 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updating instance_info_cache with network_info: [{"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.658 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.659 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.662 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.662 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.666 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.686 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.696 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.697 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.697 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.698 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.779 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.867 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.870 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.937 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:10:44 compute-0 nova_compute[189381]: 2025-11-25 11:10:44.952 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.021 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.023 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.087 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
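The four qemu-img runs above belong to the update_available_resource periodic task: the resource tracker measures every instance disk with `qemu-img info --force-share --output=json`, wrapped in oslo_concurrency's prlimit helper so the child's address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30 s) are capped and a hung qemu-img cannot stall the compute service. A minimal sketch of the same measurement, assuming read access to one of the disk paths from the log:

```python
# Sketch: measure an instance disk the way the resource tracker does.
# The path is copied from the log; "virtual-size" is the guest-visible
# size in bytes, "actual-size" the space used on the host.
import json
import subprocess

DISK = "/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk"

out = subprocess.run(
    ["qemu-img", "info", DISK, "--force-share", "--output=json"],
    capture_output=True, text=True, check=True,
).stdout
info = json.loads(out)
print(info["virtual-size"], info.get("actual-size"))
```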
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.402 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.403 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5043MB free_disk=72.07141494750977GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.404 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.405 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.589 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.590 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.590 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.591 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.651 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.666 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
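The inventory record above is what placement uses to answer scheduling requests: for each resource class, capacity is roughly (total - reserved) * allocation_ratio, which is why a host with 8 physical vCPUs and 2 allocated can still be far from full. A worked check with the values from this log line:

```python
# Worked check: schedulable capacity implied by the inventory reported
# above, using capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```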
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.668 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:10:45 compute-0 nova_compute[189381]: 2025-11-25 11:10:45.669 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.265s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:10:48 compute-0 nova_compute[189381]: 2025-11-25 11:10:48.694 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:49 compute-0 nova_compute[189381]: 2025-11-25 11:10:49.025 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:49 compute-0 nova_compute[189381]: 2025-11-25 11:10:49.026 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:49 compute-0 nova_compute[189381]: 2025-11-25 11:10:49.027 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
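The "skipping" message above means soft delete is disabled on this host: ComputeManager._reclaim_queued_deletes only purges soft-deleted instances when reclaim_instance_interval is positive. A hedged nova.conf sketch enabling it (the value is illustrative, not taken from this deployment):

```ini
[DEFAULT]
# Keep deleted instances recoverable for an hour before the
# _reclaim_queued_deletes periodic task purges them. With the default
# of 0 the task logs "skipping", as seen above.
reclaim_instance_interval = 3600
```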
Nov 25 11:10:49 compute-0 nova_compute[189381]: 2025-11-25 11:10:49.689 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:50 compute-0 podman[258146]: 2025-11-25 11:10:50.952892146 +0000 UTC m=+0.067914019 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 25 11:10:50 compute-0 podman[258147]: 2025-11-25 11:10:50.95718271 +0000 UTC m=+0.071108782 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:10:52 compute-0 nova_compute[189381]: 2025-11-25 11:10:52.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:10:53 compute-0 nova_compute[189381]: 2025-11-25 11:10:53.698 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:53 compute-0 podman[258184]: 2025-11-25 11:10:53.976362695 +0000 UTC m=+0.088010821 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, release=1214.1726694543, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, container_name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.tags=base rhel9)
Nov 25 11:10:54 compute-0 nova_compute[189381]: 2025-11-25 11:10:54.692 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:58 compute-0 nova_compute[189381]: 2025-11-25 11:10:58.702 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:59 compute-0 nova_compute[189381]: 2025-11-25 11:10:59.695 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:10:59 compute-0 podman[203557]: time="2025-11-25T11:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:10:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:10:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
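The two GETs above are the prometheus-podman-exporter (from the first event in this window) scraping the libpod REST API through the service socket named in its CONTAINER_HOST environment, unix:///run/podman/podman.sock. A minimal sketch of the same containers/json query using only the standard library; it speaks HTTP/1.0 over the unix socket so the response arrives un-chunked (root or socket-group access assumed):

```python
# Sketch: list containers via the libpod REST API over the Podman
# service socket from the log. HTTP/1.0 keeps the reply un-chunked so
# the naive header/body split below is safe.
import socket

SOCK = "/run/podman/podman.sock"
REQ = b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n\r\n"

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK)
    s.sendall(REQ)
    raw = b""
    while chunk := s.recv(65536):
        raw += chunk

headers, _, body = raw.partition(b"\r\n\r\n")
print(headers.decode("ascii", "replace").splitlines()[0])  # HTTP/1.0 200 OK
print(len(body), "bytes of container JSON")
```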
Nov 25 11:11:00 compute-0 podman[258204]: 2025-11-25 11:11:00.95423636 +0000 UTC m=+0.069447283 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 25 11:11:01 compute-0 openstack_network_exporter[205722]: ERROR   11:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:11:01 compute-0 openstack_network_exporter[205722]: ERROR   11:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:11:01 compute-0 openstack_network_exporter[205722]: ERROR   11:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:11:01 compute-0 openstack_network_exporter[205722]: ERROR   11:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
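These exporter errors are expected on a compute node rather than a sign of breakage: ovn-northd only runs on the control plane, so its control socket cannot be found here, and the dpif-netdev appctl commands apply only to the userspace (netdev) datapath, while this host's ports bind with datapath_type "system" (see the network_info above). A sketch of the control-socket discovery that is failing, assuming the conventional run directories:

```python
# Sketch: look for OVS/OVN daemon control sockets the way an appctl
# client would. The run directories are conventional defaults and may
# differ per deployment.
from pathlib import Path

for rundir in (Path("/var/run/openvswitch"), Path("/var/run/ovn")):
    ctls = sorted(rundir.glob("*.ctl")) if rundir.is_dir() else []
    print(rundir, "->", [p.name for p in ctls] or "no control sockets")
```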
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.343 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.344 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.350 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'name': 'te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.354 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'name': 'te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
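The two "instance data" records above come out of ceilometer's libvirt discovery: the agent enumerates the local domains and folds the Nova metadata carried in each domain's XML into a Nova-like server record (flavor, image, tenant, metering.server_group), so no per-poll round trip to the Nova API is needed. A minimal sketch of the enumeration step, assuming the libvirt-python bindings and read access to qemu:///system:

```python
# Sketch: enumerate local libvirt domains, the starting point of
# ceilometer's discover_libvirt_polling. Requires libvirt-python.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
try:
    for dom in conn.listAllDomains():
        # e.g. instance-0000000a / instance-0000000f from the log
        print(dom.name(), dom.UUIDString(), bool(dom.isActive()))
finally:
    conn.close()
```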
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4>, <NovaLikeServer: te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4>, <NovaLikeServer: te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{'network.outgoing.bytes': [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4>, <NovaLikeServer: te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4>, <NovaLikeServer: te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.355 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.356 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.357 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:11:03.356937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.360 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.364 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.365 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.366 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.366 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.366 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:11:03.365964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.367 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
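The cumulative and delta samples above are mutually consistent: the delta pollster reports the change in the cumulative counter since the previous cycle, so instance 18a30ced (cumulative 2250, delta 630) must have read 1620 last cycle, while dba9274f reports a delta equal to its cumulative 1620, i.e. its counter started from zero within the window. A worked check:

```python
# Worked check: the previous cumulative reading implied by each
# (cumulative, delta) pair copied from the samples above.
samples = {
    "18a30ced": (2250, 630),    # network.outgoing.bytes, .delta
    "dba9274f": (1620, 1620),
}
for inst, (cum, delta) in samples.items():
    print(inst, "previous cycle:", cum - delta)   # 1620 and 0
```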
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.367 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.367 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:11:03.367969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.389 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/memory.usage volume: 42.42578125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.419 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/memory.usage volume: 43.6484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.420 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.421 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.422 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.422 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:11:03.421787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.424 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.424 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.425 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:11:03.424324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.425 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.426 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:11:03.426626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.427 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.427 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.427 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.428 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.428 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.429 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/cpu volume: 335460000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:11:03.428940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.429 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/cpu volume: 144720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.430 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.431 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.431 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.431 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:11:03.431000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:11:03.432571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.447 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.447 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.464 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.465 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.466 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.466 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.467 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.468 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:11:03.467619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.503 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.504 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.540 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 28810240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.540 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.541 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.541 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.541 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 1630906369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.541 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 77005350 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.542 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 1432726245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.542 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 109472392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.542 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:11:03.541398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.542 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.543 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.543 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.543 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.543 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 1030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:11:03.543014) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.544 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.545 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.545 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.545 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.546 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.546 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.547 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.547 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 73076736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.547 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:11:03.544746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.548 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:11:03.547235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.548 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 72843264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.548 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.550 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.550 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 10475151429 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.551 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:11:03.550494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.551 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.551 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 3314898774 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.551 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.552 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.553 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.554 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:11:03.553771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.554 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.555 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.555 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.555 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.556 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.556 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:11:03.556231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.556 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.557 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.557 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.557 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.558 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.559 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:11:03.559504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.560 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.560 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes.delta volume: 1886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.561 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.562 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:11:03.561996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.563 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.563 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 30220288 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.563 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.565 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.566 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:11:03.565955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.566 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.567 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.568 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.568 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:11:03.568081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.568 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.569 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.570 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.571 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:11:03.570812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.572 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.573 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:11:03.572788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.573 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.575 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.575 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:11:03.575322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.575 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.576 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:11:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:11:03.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
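
The run above is one complete ceilometer-compute polling cycle: each meter logs a "Polling pollster ..." line, per-instance DEBUG samples of the shape "<instance-uuid>/<meter> volume: <n>", and a closing "Finished processing pollster [...]". A minimal sketch for summarizing such a cycle offline, assuming only the line shapes visible here (fed, for example, a saved journal excerpt on stdin); the regexes and script are illustrative, not part of ceilometer:

    import re
    import sys
    from collections import Counter

    # Line shapes taken from the log above.
    FINISHED = re.compile(r"Finished processing pollster \[(?P<name>[^\]]+)\]")
    SAMPLE = re.compile(r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\S+)")

    def summarize(lines):
        finished, samples = [], Counter()
        for line in lines:
            if m := FINISHED.search(line):
                finished.append(m["name"])
            elif m := SAMPLE.search(line):
                samples[m["meter"]] += 1
        return finished, samples

    if __name__ == "__main__":
        done, per_meter = summarize(sys.stdin)
        print(f"{len(done)} pollsters finished")
        for meter, count in sorted(per_meter.items()):
            print(f"  {meter}: {count} samples")
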
Nov 25 11:11:03 compute-0 nova_compute[189381]: 2025-11-25 11:11:03.704 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:04 compute-0 nova_compute[189381]: 2025-11-25 11:11:04.698 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:04 compute-0 podman[258222]: 2025-11-25 11:11:04.962009742 +0000 UTC m=+0.079672360 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 11:11:04 compute-0 podman[258223]: 2025-11-25 11:11:04.983071372 +0000 UTC m=+0.097187047 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 11:11:06 compute-0 nova_compute[189381]: 2025-11-25 11:11:06.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:07 compute-0 podman[258266]: 2025-11-25 11:11:07.964053151 +0000 UTC m=+0.067147617 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118)
Nov 25 11:11:08 compute-0 podman[258265]: 2025-11-25 11:11:08.019629581 +0000 UTC m=+0.125280871 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:11:08 compute-0 nova_compute[189381]: 2025-11-25 11:11:08.708 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:09 compute-0 nova_compute[189381]: 2025-11-25 11:11:09.701 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:12 compute-0 podman[258309]: 2025-11-25 11:11:12.955020411 +0000 UTC m=+0.070229486 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 11:11:13 compute-0 nova_compute[189381]: 2025-11-25 11:11:13.712 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:14 compute-0 nova_compute[189381]: 2025-11-25 11:11:14.703 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:18 compute-0 nova_compute[189381]: 2025-11-25 11:11:18.714 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:19 compute-0 nova_compute[189381]: 2025-11-25 11:11:19.706 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:21 compute-0 podman[258334]: 2025-11-25 11:11:21.958125991 +0000 UTC m=+0.070657158 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251118, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 25 11:11:21 compute-0 podman[258335]: 2025-11-25 11:11:21.96672629 +0000 UTC m=+0.074720956 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi)
Nov 25 11:11:23 compute-0 nova_compute[189381]: 2025-11-25 11:11:23.717 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:24 compute-0 nova_compute[189381]: 2025-11-25 11:11:24.709 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:24 compute-0 podman[258373]: 2025-11-25 11:11:24.95326825 +0000 UTC m=+0.072871333 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, release=1214.1726694543, architecture=x86_64, container_name=kepler, com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9)
Nov 25 11:11:28 compute-0 nova_compute[189381]: 2025-11-25 11:11:28.721 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:29 compute-0 nova_compute[189381]: 2025-11-25 11:11:29.712 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:29 compute-0 podman[203557]: time="2025-11-25T11:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:11:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:11:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
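
Those two GET requests are the libpod REST API being scraped over podman's unix socket; the socket path (/run/podman/podman.sock) and the API version prefix (v4.9.3) both appear elsewhere in this log. A stand-alone sketch of the same query using only the Python standard library; the helper class and the printed fields are illustrative, not a documented client:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a unix-domain socket."""

        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the first GET in the log above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c.get("State"))
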
Nov 25 11:11:31 compute-0 openstack_network_exporter[205722]: ERROR   11:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:11:31 compute-0 openstack_network_exporter[205722]: ERROR   11:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:11:31 compute-0 openstack_network_exporter[205722]: ERROR   11:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:11:31 compute-0 openstack_network_exporter[205722]: ERROR   11:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:11:31 compute-0 openstack_network_exporter[205722]: ERROR   11:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
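
The ovn-northd errors are expected on a compute node: ovn-northd is a control-plane daemon and does not run here, so no control socket exists for it. The dpif-netdev failures just mean no userspace (PMD) datapath is configured, as with a kernel-datapath OVS. The ovsdb-server error is less obviously benign and may indicate the exporter container cannot see the socket at the path it expects. A quick local check for which daemons actually expose control sockets; the glob patterns assume the usual /var/run layout and may need adjusting per deployment:

    import glob

    # Typical control-socket locations (an assumption; verify on your host).
    PATTERNS = {
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovn-controller": "/var/run/ovn/ovn-controller.*.ctl",
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",  # control-plane only
    }

    for daemon, pattern in PATTERNS.items():
        found = glob.glob(pattern)
        print(f"{daemon}: {found if found else 'no control socket found'}")
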
Nov 25 11:11:31 compute-0 podman[258393]: 2025-11-25 11:11:31.93892274 +0000 UTC m=+0.056025035 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:11:33 compute-0 nova_compute[189381]: 2025-11-25 11:11:33.725 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:34 compute-0 nova_compute[189381]: 2025-11-25 11:11:34.715 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:35 compute-0 podman[258413]: 2025-11-25 11:11:35.953235631 +0000 UTC m=+0.062911934 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 11:11:35 compute-0 podman[258412]: 2025-11-25 11:11:35.967404061 +0000 UTC m=+0.081809861 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Nov 25 11:11:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:11:36.075 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:11:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:11:36.076 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:11:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:11:36.076 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:11:37 compute-0 nova_compute[189381]: 2025-11-25 11:11:37.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:38 compute-0 nova_compute[189381]: 2025-11-25 11:11:38.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:38 compute-0 nova_compute[189381]: 2025-11-25 11:11:38.730 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:38 compute-0 podman[258455]: 2025-11-25 11:11:38.981106249 +0000 UTC m=+0.097084845 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:11:38 compute-0 podman[258454]: 2025-11-25 11:11:38.986503355 +0000 UTC m=+0.106440975 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 11:11:39 compute-0 nova_compute[189381]: 2025-11-25 11:11:39.719 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:40 compute-0 nova_compute[189381]: 2025-11-25 11:11:40.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:40 compute-0 nova_compute[189381]: 2025-11-25 11:11:40.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:11:40 compute-0 nova_compute[189381]: 2025-11-25 11:11:40.024 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:11:41 compute-0 nova_compute[189381]: 2025-11-25 11:11:41.099 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:11:41 compute-0 nova_compute[189381]: 2025-11-25 11:11:41.100 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:11:41 compute-0 nova_compute[189381]: 2025-11-25 11:11:41.101 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:11:41 compute-0 nova_compute[189381]: 2025-11-25 11:11:41.102 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:11:43 compute-0 nova_compute[189381]: 2025-11-25 11:11:43.734 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:43 compute-0 podman[258497]: 2025-11-25 11:11:43.941888065 +0000 UTC m=+0.056551679 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.720 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.810 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
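
The cache-heal run dumps the instance's full network_info as JSON inside the log message, which is complete but noisy. A throwaway parser for pulling the fixed IPs back out of one such line; it assumes only the layout visible above (the JSON array starts at '[{"id"' and the logging suffix follows the final '}]'):

    import json

    def fixed_ips_from_log_line(line):
        start = line.index('[{"id"')
        end = line.rindex('}]') + 2
        vifs = json.loads(line[start:end])
        return [ip["address"]
                for vif in vifs
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]]

    # e.g. fixed_ips_from_log_line(<the line above>) -> ['10.100.2.10']
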
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.827 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.828 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.829 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.829 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.852 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.853 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.853 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.854 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:11:44 compute-0 nova_compute[189381]: 2025-11-25 11:11:44.936 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.024 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.026 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.092 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.100 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.165 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.167 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.230 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
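
Each instance disk is sized by running qemu-img info under oslo.concurrency's prlimit wrapper, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a pathological image cannot stall the resource audit. A rough stand-alone equivalent of one of the calls above, a sketch only (the caps mirror the logged flags; error handling is elided):

    import json
    import resource
    import subprocess

    def qemu_img_info(path):
        def cap_limits():
            # Same caps as the logged prlimit invocation: --as=1073741824 --cpu=30.
            resource.setrlimit(resource.RLIMIT_AS, (1073741824, 1073741824))
            resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

        proc = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, preexec_fn=cap_limits,
            env={"LC_ALL": "C", "LANG": "C"},
        )
        return json.loads(proc.stdout)

    info = qemu_img_info(
        "/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk")
    print(info["format"], info["virtual-size"])
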
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.552 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.554 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5048MB free_disk=72.07048034667969GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.555 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.555 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.652 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.653 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.653 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.654 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.709 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.721 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.724 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.724 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:11:45 compute-0 nova_compute[189381]: 2025-11-25 11:11:45.917 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:47 compute-0 nova_compute[189381]: 2025-11-25 11:11:47.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:48 compute-0 nova_compute[189381]: 2025-11-25 11:11:48.736 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:49 compute-0 nova_compute[189381]: 2025-11-25 11:11:49.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:49 compute-0 nova_compute[189381]: 2025-11-25 11:11:49.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:11:49 compute-0 nova_compute[189381]: 2025-11-25 11:11:49.723 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:52 compute-0 podman[258535]: 2025-11-25 11:11:52.956855478 +0000 UTC m=+0.070597947 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 25 11:11:52 compute-0 podman[258536]: 2025-11-25 11:11:52.965613281 +0000 UTC m=+0.074357785 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:11:53 compute-0 nova_compute[189381]: 2025-11-25 11:11:53.738 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:54 compute-0 nova_compute[189381]: 2025-11-25 11:11:54.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:11:54 compute-0 nova_compute[189381]: 2025-11-25 11:11:54.725 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:55 compute-0 podman[258574]: 2025-11-25 11:11:55.954015785 +0000 UTC m=+0.063659656 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Nov 25 11:11:58 compute-0 nova_compute[189381]: 2025-11-25 11:11:58.740 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:59 compute-0 nova_compute[189381]: 2025-11-25 11:11:59.728 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:11:59 compute-0 podman[203557]: time="2025-11-25T11:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:11:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:11:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 25 11:12:01 compute-0 openstack_network_exporter[205722]: ERROR   11:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:12:01 compute-0 openstack_network_exporter[205722]: ERROR   11:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:12:01 compute-0 openstack_network_exporter[205722]: ERROR   11:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:12:01 compute-0 openstack_network_exporter[205722]: ERROR   11:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:12:01 compute-0 openstack_network_exporter[205722]: ERROR   11:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:12:02 compute-0 podman[258594]: 2025-11-25 11:12:02.94399819 +0000 UTC m=+0.063776058 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible)
Nov 25 11:12:03 compute-0 nova_compute[189381]: 2025-11-25 11:12:03.742 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:04 compute-0 nova_compute[189381]: 2025-11-25 11:12:04.730 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:06 compute-0 podman[258613]: 2025-11-25 11:12:06.978709273 +0000 UTC m=+0.090895325 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:12:06 compute-0 podman[258612]: 2025-11-25 11:12:06.984907482 +0000 UTC m=+0.099847014 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350)
Nov 25 11:12:08 compute-0 nova_compute[189381]: 2025-11-25 11:12:08.745 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:09 compute-0 nova_compute[189381]: 2025-11-25 11:12:09.733 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:09 compute-0 podman[258656]: 2025-11-25 11:12:09.950100784 +0000 UTC m=+0.062791240 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:12:09 compute-0 podman[258655]: 2025-11-25 11:12:09.987017854 +0000 UTC m=+0.100209205 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:12:13 compute-0 nova_compute[189381]: 2025-11-25 11:12:13.747 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:14 compute-0 podman[258700]: 2025-11-25 11:12:14.734852379 +0000 UTC m=+0.061543064 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:12:14 compute-0 nova_compute[189381]: 2025-11-25 11:12:14.736 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:18 compute-0 nova_compute[189381]: 2025-11-25 11:12:18.749 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:19 compute-0 nova_compute[189381]: 2025-11-25 11:12:19.739 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:23 compute-0 nova_compute[189381]: 2025-11-25 11:12:23.752 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:23 compute-0 podman[258726]: 2025-11-25 11:12:23.943309579 +0000 UTC m=+0.062541183 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:12:23 compute-0 podman[258727]: 2025-11-25 11:12:23.960650102 +0000 UTC m=+0.074113849 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm)
Nov 25 11:12:24 compute-0 nova_compute[189381]: 2025-11-25 11:12:24.741 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:26 compute-0 podman[258764]: 2025-11-25 11:12:26.952694671 +0000 UTC m=+0.069539806 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, container_name=kepler, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0)
Nov 25 11:12:28 compute-0 nova_compute[189381]: 2025-11-25 11:12:28.756 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:29 compute-0 podman[203557]: time="2025-11-25T11:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:12:29 compute-0 nova_compute[189381]: 2025-11-25 11:12:29.744 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:12:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Nov 25 11:12:31 compute-0 openstack_network_exporter[205722]: ERROR   11:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:12:31 compute-0 openstack_network_exporter[205722]: ERROR   11:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:12:31 compute-0 openstack_network_exporter[205722]: ERROR   11:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:12:31 compute-0 openstack_network_exporter[205722]: ERROR   11:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:12:31 compute-0 openstack_network_exporter[205722]: ERROR   11:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:12:33 compute-0 nova_compute[189381]: 2025-11-25 11:12:33.760 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:33 compute-0 podman[258784]: 2025-11-25 11:12:33.974282522 +0000 UTC m=+0.088099154 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:12:34 compute-0 nova_compute[189381]: 2025-11-25 11:12:34.746 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:12:36.077 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:12:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:12:36.077 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:12:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:12:36.078 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:12:37 compute-0 podman[258803]: 2025-11-25 11:12:37.954436654 +0000 UTC m=+0.068005272 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:12:37 compute-0 podman[258802]: 2025-11-25 11:12:37.975737341 +0000 UTC m=+0.093529991 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Nov 25 11:12:38 compute-0 nova_compute[189381]: 2025-11-25 11:12:38.764 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:39 compute-0 nova_compute[189381]: 2025-11-25 11:12:39.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:39 compute-0 nova_compute[189381]: 2025-11-25 11:12:39.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:39 compute-0 nova_compute[189381]: 2025-11-25 11:12:39.749 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:40 compute-0 podman[258845]: 2025-11-25 11:12:40.957942435 +0000 UTC m=+0.073120650 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:12:41 compute-0 podman[258844]: 2025-11-25 11:12:41.016859522 +0000 UTC m=+0.135720634 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 25 11:12:42 compute-0 nova_compute[189381]: 2025-11-25 11:12:42.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:42 compute-0 nova_compute[189381]: 2025-11-25 11:12:42.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:12:43 compute-0 nova_compute[189381]: 2025-11-25 11:12:43.187 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:12:43 compute-0 nova_compute[189381]: 2025-11-25 11:12:43.188 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:12:43 compute-0 nova_compute[189381]: 2025-11-25 11:12:43.188 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:12:43 compute-0 nova_compute[189381]: 2025-11-25 11:12:43.765 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:44 compute-0 nova_compute[189381]: 2025-11-25 11:12:44.752 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:44 compute-0 podman[258890]: 2025-11-25 11:12:44.952421211 +0000 UTC m=+0.069488504 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.323 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updating instance_info_cache with network_info: [{"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.338 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.338 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.339 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.340 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.340 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.363 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.363 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.363 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.364 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.519 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.608 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.610 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.672 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.683 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.747 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.748 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:12:45 compute-0 nova_compute[189381]: 2025-11-25 11:12:45.817 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
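The four commands above show how the resource tracker probes each instance's disk: qemu-img info is wrapped in oslo_concurrency.prlimit so a misbehaving image cannot exhaust the host (1 GiB of address space, 30 s of CPU), and --force-share lets it read a disk the running guest holds open. A minimal sketch of the same guarded probe, assuming qemu-img and oslo.concurrency are installed (the helper name is illustrative):

    import json
    from oslo_concurrency import processutils

    def qemu_img_info(path):
        # Mirror the "--as=1073741824 --cpu=30" caps from the log lines above.
        limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                            cpu_time=30)
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=limits)
        return json.loads(out)

processutils.execute spawns the same "python3 -m oslo_concurrency.prlimit -- ..." wrapper seen in the log when the prlimit argument is supplied.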
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.204 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.207 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5031MB free_disk=72.07059097290039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.208 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.209 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.317 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.318 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.319 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.319 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.408 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.429 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
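The inventory record above is what placement uses to compute schedulable capacity: for each resource class the usable amount is (total - reserved) * allocation_ratio. Applying that to the values in the log (a short illustrative calculation, not placement's code):

    # Usable capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2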
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.431 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:12:46 compute-0 nova_compute[189381]: 2025-11-25 11:12:46.432 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
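The acquire/release pairs around the resource update come from oslo.concurrency's lock decorator: everything that touches compute_resources is serialized behind one named semaphore, and the "Acquiring"/"acquired"/"released" lines are emitted by the decorator's inner wrapper (lockutils.py:404/409/423 above). A hedged sketch of the pattern, not nova's exact code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Runs only while the named lock is held; oslo logs the
        # acquire/release messages seen in the journal.
        ...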
Nov 25 11:12:48 compute-0 nova_compute[189381]: 2025-11-25 11:12:48.768 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:49 compute-0 nova_compute[189381]: 2025-11-25 11:12:49.754 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:52 compute-0 nova_compute[189381]: 2025-11-25 11:12:52.115 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:52 compute-0 nova_compute[189381]: 2025-11-25 11:12:52.116 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:52 compute-0 nova_compute[189381]: 2025-11-25 11:12:52.116 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:12:53 compute-0 nova_compute[189381]: 2025-11-25 11:12:53.770 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:54 compute-0 nova_compute[189381]: 2025-11-25 11:12:54.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:12:54 compute-0 nova_compute[189381]: 2025-11-25 11:12:54.756 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:54 compute-0 podman[258927]: 2025-11-25 11:12:54.95288521 +0000 UTC m=+0.066532859 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 25 11:12:54 compute-0 podman[258928]: 2025-11-25 11:12:54.957336479 +0000 UTC m=+0.067562138 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:12:57 compute-0 podman[258966]: 2025-11-25 11:12:57.947057531 +0000 UTC m=+0.066116517 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.buildah.version=1.29.0, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, com.redhat.component=ubi9-container, release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 25 11:12:58 compute-0 nova_compute[189381]: 2025-11-25 11:12:58.773 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:12:59 compute-0 podman[203557]: time="2025-11-25T11:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:12:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:12:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
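The two GET lines are the podman API service answering libpod REST calls over its unix socket (the Go-http-client user agent is the metrics collector). The same endpoint can be queried directly from Python; a sketch assuming a root socket at /run/podman/podman.sock (the socket path and class name are assumptions):

    import http.client
    import json
    import socket

    class UnixSocketHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket; the host header is a dummy value."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixSocketHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers))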
Nov 25 11:12:59 compute-0 nova_compute[189381]: 2025-11-25 11:12:59.759 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:01 compute-0 openstack_network_exporter[205722]: ERROR   11:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:13:01 compute-0 openstack_network_exporter[205722]: ERROR   11:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:13:01 compute-0 openstack_network_exporter[205722]: ERROR   11:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:13:01 compute-0 openstack_network_exporter[205722]: ERROR   11:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:13:01 compute-0 openstack_network_exporter[205722]: ERROR   11:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.343 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.343 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
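The block of near-identical lines above shows each pollster being registered against one shared ThreadPoolExecutor (with the single worker thread noted earlier) plus shared cache, history, and discovery dicts. A schematic of that fan-out with illustrative names, not ceilometer's implementation:

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, threads=1):
        # One executor per cycle; every pollster shares the same dicts,
        # matching the "[{}]" cache/history/discovery values in the log.
        cache, history, discovery = {}, {}, {}
        with ThreadPoolExecutor(max_workers=threads) as pool:
            futures = [pool.submit(p, cache, history, discovery)
                       for p in pollsters]
            return [f.result() for f in futures]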
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.350 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'name': 'te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.354 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'name': 'te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.354 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:13:03.354677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.354 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.359 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.363 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.364 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:13:03.364851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.365 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.366 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.366 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.366 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:13:03.366959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.367 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.388 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/memory.usage volume: 42.421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.407 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/memory.usage volume: 43.6484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.408 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.408 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:13:03.408938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.409 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.409 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.410 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.410 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:13:03.410719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.411 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.411 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.412 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:13:03.412281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.412 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.413 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.413 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:13:03.413829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.413 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.414 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/cpu volume: 336630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.414 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/cpu volume: 264290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
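The cpu meter is cumulative guest CPU time in nanoseconds, so the two samples above correspond to roughly 337 s and 264 s of CPU consumed since the guests started; a one-line conversion:

    for ns in (336630000000, 264290000000):
        print(ns / 1e9, 'seconds')  # 336.63 and 264.29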
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.415 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.415 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.415 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:13:03.415344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.415 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.416 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.416 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
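Every pollster run above passes through the same coordination check, and with a coordination group name of [None] the hash-ring test is a no-op. When a group is configured, agents join a hash ring (tooz in a real deployment) and each agent polls only the resources it owns. A rough sketch of that ownership test; the md5 bucketing below is an illustrative simplification, not tooz's actual partitioning algorithm:

    # Illustrative hash-ring ownership test; not tooz's real algorithm.
    import hashlib

    def owns_resource(agent_id: str, agents: list[str], resource_id: str) -> bool:
        # The [None] coordination group in the log corresponds to this branch:
        # no partitioning, every agent polls everything it discovers.
        if len(agents) <= 1:
            return True
        bucket = int(hashlib.md5(resource_id.encode()).hexdigest(), 16) % len(agents)
        return sorted(agents)[bucket] == agent_id

    agents = ["compute-0", "compute-1"]
    print(owns_resource("compute-0", agents, "18a30ced-09e6-4c6a-9ea3-4c59f437a71a"))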
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.416 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:13:03.416902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.417 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.429 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.430 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.442 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.443 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
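disk.device.capacity emits one sample per (instance, device) pair: each instance above carries a 1073741824-byte (1 GiB) root disk plus a 509952-byte device, plausibly the config drive. A sketch of that per-device fan-out, with the device names and resource-id scheme as assumptions:

    # Sketch of the per-device fan-out; device names and the
    # "<instance>-<device>" resource-id scheme are assumptions.
    devices = {
        "vda": 1073741824,  # 1 GiB root disk
        "sda": 509952,      # small secondary device, plausibly the config drive
    }

    def capacity_samples(instance_id: str, devices: dict[str, int]):
        for dev, capacity in devices.items():
            yield {
                "name": "disk.device.capacity",
                "unit": "B",
                "volume": capacity,
                "resource_id": f"{instance_id}-{dev}",
            }

    for s in capacity_samples("18a30ced-09e6-4c6a-9ea3-4c59f437a71a", devices):
        print(s)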
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.443 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:13:03.444192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.444 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.482 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.482 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.516 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 28810240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.517 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.518 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.518 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.518 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.518 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:13:03.518765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.519 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 1630906369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.519 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 77005350 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.520 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 1432726245 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.520 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 109472392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
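The read-latency volumes are cumulative nanoseconds spent servicing reads (libvirt's rd_total_times-style counter), so for the first device of instance 18a30ced-...:

    # Cumulative nanoseconds -> seconds for the first device above:
    rd_total_time_ns = 1630906369
    print(rd_total_time_ns / 1e9)  # ~1.63 s spent in reads since the domain started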
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.520 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:13:03.520970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.521 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.521 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.521 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.522 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 1030 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.522 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.522 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.523 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:13:03.523141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.523 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.523 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.523 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.524 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.524 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:13:03.524944) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.525 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.525 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.525 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.525 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 72843264 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.526 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.526 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:13:03.526958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.527 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 11156943053 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.527 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.528 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 3314898774 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.528 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.528 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:13:03.528878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.529 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.529 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
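power.state volume 1 indicates a running guest for both instances. The numeric values follow the nova/libvirt power-state convention; treat the table below as an illustrative reference, not a quote from ceilometer's source:

    # Illustrative power-state table (nova power_state convention; an
    # assumption for readability, not taken from ceilometer code):
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES[1])  # both instances above report volume 1 -> RUNNING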
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.530 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:13:03.530280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.530 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.530 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.531 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.531 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.531 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.532 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:13:03.532317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.532 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.532 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.533 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.533 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.533 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:13:03.533810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.533 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.534 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.534 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.534 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 30220288 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.535 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.535 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
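network.incoming.bytes.rate is skipped here because discovery returned only resources this polling task has already handled in the current cycle. A rough, assumption-laden sketch of the per-cycle dedup cache behind that skip; the cache key and structure are illustrative:

    # Illustrative per-cycle resource cache; not ceilometer's actual structure.
    seen: dict[str, set[str]] = {}

    def new_resources(cache_key: str, discovered: list[str]) -> list[str]:
        handled = seen.setdefault(cache_key, set())
        fresh = [r for r in discovered if r not in handled]
        handled.update(fresh)
        return fresh  # empty -> "Skip pollster ..., no new resources found"

    cycle = ["18a30ced-09e6-4c6a-9ea3-4c59f437a71a",
             "dba9274f-6164-41cc-8f4b-870c1cb3f67c"]
    print(new_resources("local_instances", cycle))  # first use: both instances are new
    print(new_resources("local_instances", cycle))  # later use: nothing new -> skip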
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.535 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.536 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:13:03.536070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.536 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.537 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.537 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:13:03.537828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.537 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.538 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.538 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.539 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:13:03.539654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.539 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.540 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.540 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.541 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:13:03.541216) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.541 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.542 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.542 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.543 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:13:03.543344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.543 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.544 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.544 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:13:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:13:03 compute-0 nova_compute[189381]: 2025-11-25 11:13:03.775 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:04 compute-0 nova_compute[189381]: 2025-11-25 11:13:04.762 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:04 compute-0 podman[258988]: 2025-11-25 11:13:04.937408807 +0000 UTC m=+0.055997003 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 25 11:13:08 compute-0 nova_compute[189381]: 2025-11-25 11:13:08.818 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:08 compute-0 podman[259007]: 2025-11-25 11:13:08.966949649 +0000 UTC m=+0.073531531 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, build-date=2025-08-20T13:12:41, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 25 11:13:08 compute-0 podman[259008]: 2025-11-25 11:13:08.982779968 +0000 UTC m=+0.086052864 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 11:13:09 compute-0 nova_compute[189381]: 2025-11-25 11:13:09.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:09 compute-0 nova_compute[189381]: 2025-11-25 11:13:09.765 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:11 compute-0 podman[259050]: 2025-11-25 11:13:11.959236876 +0000 UTC m=+0.070097943 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:13:12 compute-0 podman[259049]: 2025-11-25 11:13:12.01979693 +0000 UTC m=+0.132403747 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 11:13:13 compute-0 nova_compute[189381]: 2025-11-25 11:13:13.779 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:14 compute-0 nova_compute[189381]: 2025-11-25 11:13:14.769 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:15 compute-0 podman[259093]: 2025-11-25 11:13:15.960689824 +0000 UTC m=+0.070310448 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:13:18 compute-0 nova_compute[189381]: 2025-11-25 11:13:18.782 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:19 compute-0 nova_compute[189381]: 2025-11-25 11:13:19.772 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:23 compute-0 nova_compute[189381]: 2025-11-25 11:13:23.784 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:24 compute-0 nova_compute[189381]: 2025-11-25 11:13:24.776 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:25 compute-0 podman[259119]: 2025-11-25 11:13:25.953787729 +0000 UTC m=+0.064707606 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Nov 25 11:13:25 compute-0 podman[259118]: 2025-11-25 11:13:25.987832195 +0000 UTC m=+0.103041846 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:13:28 compute-0 nova_compute[189381]: 2025-11-25 11:13:28.790 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:28 compute-0 podman[259154]: 2025-11-25 11:13:28.952860772 +0000 UTC m=+0.068412774 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Nov 25 11:13:29 compute-0 podman[203557]: time="2025-11-25T11:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:13:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:13:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Nov 25 11:13:29 compute-0 nova_compute[189381]: 2025-11-25 11:13:29.779 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:31 compute-0 openstack_network_exporter[205722]: ERROR   11:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:13:31 compute-0 openstack_network_exporter[205722]: ERROR   11:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:13:31 compute-0 openstack_network_exporter[205722]: ERROR   11:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:13:31 compute-0 openstack_network_exporter[205722]: ERROR   11:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:13:31 compute-0 openstack_network_exporter[205722]: ERROR   11:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:13:33 compute-0 nova_compute[189381]: 2025-11-25 11:13:33.791 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:34 compute-0 nova_compute[189381]: 2025-11-25 11:13:34.781 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:35 compute-0 podman[259173]: 2025-11-25 11:13:35.968670077 +0000 UTC m=+0.087395444 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 25 11:13:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:13:36.079 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:13:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:13:36.079 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:13:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:13:36.079 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:13:38 compute-0 nova_compute[189381]: 2025-11-25 11:13:38.793 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:39 compute-0 nova_compute[189381]: 2025-11-25 11:13:39.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:39 compute-0 nova_compute[189381]: 2025-11-25 11:13:39.785 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:39 compute-0 podman[259192]: 2025-11-25 11:13:39.943777471 +0000 UTC m=+0.062204754 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public)
Nov 25 11:13:39 compute-0 podman[259193]: 2025-11-25 11:13:39.974373267 +0000 UTC m=+0.088734832 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 11:13:41 compute-0 nova_compute[189381]: 2025-11-25 11:13:41.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:42 compute-0 podman[259234]: 2025-11-25 11:13:42.973943985 +0000 UTC m=+0.088515815 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:13:42 compute-0 podman[259233]: 2025-11-25 11:13:42.977358344 +0000 UTC m=+0.095305812 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.058 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.059 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.059 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.059 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.144 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.205 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.206 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.268 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.274 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.334 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.335 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.394 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.743 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.745 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5030MB free_disk=72.07057189941406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.745 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.746 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.796 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.827 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.828 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.828 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.828 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.869 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.986 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 11:13:43 compute-0 nova_compute[189381]: 2025-11-25 11:13:43.987 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 11:13:44 compute-0 nova_compute[189381]: 2025-11-25 11:13:44.003 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 11:13:44 compute-0 nova_compute[189381]: 2025-11-25 11:13:44.035 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 11:13:44 compute-0 nova_compute[189381]: 2025-11-25 11:13:44.128 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:13:44 compute-0 nova_compute[189381]: 2025-11-25 11:13:44.144 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:13:44 compute-0 nova_compute[189381]: 2025-11-25 11:13:44.146 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:13:44 compute-0 nova_compute[189381]: 2025-11-25 11:13:44.146 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:13:44 compute-0 nova_compute[189381]: 2025-11-25 11:13:44.787 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:45 compute-0 nova_compute[189381]: 2025-11-25 11:13:45.147 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:45 compute-0 nova_compute[189381]: 2025-11-25 11:13:45.148 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:13:45 compute-0 nova_compute[189381]: 2025-11-25 11:13:45.149 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:13:46 compute-0 nova_compute[189381]: 2025-11-25 11:13:46.322 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:13:46 compute-0 nova_compute[189381]: 2025-11-25 11:13:46.324 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:13:46 compute-0 nova_compute[189381]: 2025-11-25 11:13:46.324 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:13:46 compute-0 nova_compute[189381]: 2025-11-25 11:13:46.325 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:13:46 compute-0 podman[259291]: 2025-11-25 11:13:46.942751987 +0000 UTC m=+0.060900616 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:13:48 compute-0 nova_compute[189381]: 2025-11-25 11:13:48.687 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:13:48 compute-0 nova_compute[189381]: 2025-11-25 11:13:48.706 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:13:48 compute-0 nova_compute[189381]: 2025-11-25 11:13:48.707 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:13:48 compute-0 nova_compute[189381]: 2025-11-25 11:13:48.707 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:48 compute-0 nova_compute[189381]: 2025-11-25 11:13:48.708 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:48 compute-0 nova_compute[189381]: 2025-11-25 11:13:48.798 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:49 compute-0 nova_compute[189381]: 2025-11-25 11:13:49.790 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:52 compute-0 nova_compute[189381]: 2025-11-25 11:13:52.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:52 compute-0 nova_compute[189381]: 2025-11-25 11:13:52.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:52 compute-0 nova_compute[189381]: 2025-11-25 11:13:52.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:13:53 compute-0 nova_compute[189381]: 2025-11-25 11:13:53.800 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:54 compute-0 nova_compute[189381]: 2025-11-25 11:13:54.793 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:55 compute-0 nova_compute[189381]: 2025-11-25 11:13:55.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:13:56 compute-0 podman[259315]: 2025-11-25 11:13:56.9512579 +0000 UTC m=+0.061516254 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 25 11:13:56 compute-0 podman[259314]: 2025-11-25 11:13:56.957880492 +0000 UTC m=+0.070606347 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 25 11:13:58 compute-0 nova_compute[189381]: 2025-11-25 11:13:58.802 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:59 compute-0 podman[203557]: time="2025-11-25T11:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:13:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:13:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 25 11:13:59 compute-0 nova_compute[189381]: 2025-11-25 11:13:59.796 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:13:59 compute-0 podman[259353]: 2025-11-25 11:13:59.966082049 +0000 UTC m=+0.070771392 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 25 11:14:00 compute-0 sshd-session[259351]: Connection closed by authenticating user root 171.244.51.45 port 40466 [preauth]
Nov 25 11:14:01 compute-0 openstack_network_exporter[205722]: ERROR   11:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:14:01 compute-0 openstack_network_exporter[205722]: ERROR   11:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:14:01 compute-0 openstack_network_exporter[205722]: ERROR   11:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:14:01 compute-0 openstack_network_exporter[205722]: ERROR   11:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:14:01 compute-0 openstack_network_exporter[205722]: ERROR   11:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:14:03 compute-0 nova_compute[189381]: 2025-11-25 11:14:03.804 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:04 compute-0 nova_compute[189381]: 2025-11-25 11:14:04.799 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:06 compute-0 podman[259372]: 2025-11-25 11:14:06.94451853 +0000 UTC m=+0.060736331 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent)
Nov 25 11:14:08 compute-0 nova_compute[189381]: 2025-11-25 11:14:08.806 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:09 compute-0 nova_compute[189381]: 2025-11-25 11:14:09.802 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:10 compute-0 podman[259392]: 2025-11-25 11:14:10.948667326 +0000 UTC m=+0.058960299 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:14:10 compute-0 podman[259391]: 2025-11-25 11:14:10.949875771 +0000 UTC m=+0.064037066 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 11:14:13 compute-0 nova_compute[189381]: 2025-11-25 11:14:13.809 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:13 compute-0 podman[259438]: 2025-11-25 11:14:13.953017502 +0000 UTC m=+0.065579611 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:14:14 compute-0 podman[259437]: 2025-11-25 11:14:14.007972985 +0000 UTC m=+0.123223532 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:14:14 compute-0 nova_compute[189381]: 2025-11-25 11:14:14.805 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:17 compute-0 podman[259480]: 2025-11-25 11:14:17.952666658 +0000 UTC m=+0.061861913 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:14:18 compute-0 nova_compute[189381]: 2025-11-25 11:14:18.811 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:19 compute-0 nova_compute[189381]: 2025-11-25 11:14:19.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:19 compute-0 nova_compute[189381]: 2025-11-25 11:14:19.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 11:14:19 compute-0 nova_compute[189381]: 2025-11-25 11:14:19.807 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:23 compute-0 nova_compute[189381]: 2025-11-25 11:14:23.813 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:24 compute-0 nova_compute[189381]: 2025-11-25 11:14:24.811 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:27 compute-0 podman[259506]: 2025-11-25 11:14:27.964905682 +0000 UTC m=+0.070491908 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 25 11:14:27 compute-0 podman[259507]: 2025-11-25 11:14:27.964974254 +0000 UTC m=+0.067129880 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3)
Nov 25 11:14:28 compute-0 nova_compute[189381]: 2025-11-25 11:14:28.816 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:29 compute-0 podman[203557]: time="2025-11-25T11:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:14:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:14:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Nov 25 11:14:29 compute-0 nova_compute[189381]: 2025-11-25 11:14:29.814 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:30 compute-0 podman[259541]: 2025-11-25 11:14:30.968312533 +0000 UTC m=+0.077824564 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, name=ubi9)
Nov 25 11:14:31 compute-0 openstack_network_exporter[205722]: ERROR   11:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:14:31 compute-0 openstack_network_exporter[205722]: ERROR   11:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:14:31 compute-0 openstack_network_exporter[205722]: ERROR   11:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:14:31 compute-0 openstack_network_exporter[205722]: ERROR   11:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:14:31 compute-0 openstack_network_exporter[205722]: ERROR   11:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:14:33 compute-0 nova_compute[189381]: 2025-11-25 11:14:33.819 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:34 compute-0 nova_compute[189381]: 2025-11-25 11:14:34.816 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:14:36.080 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:14:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:14:36.080 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:14:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:14:36.081 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:14:37 compute-0 podman[259562]: 2025-11-25 11:14:37.973613015 +0000 UTC m=+0.086315117 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:14:38 compute-0 nova_compute[189381]: 2025-11-25 11:14:38.823 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:39 compute-0 nova_compute[189381]: 2025-11-25 11:14:39.033 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:39 compute-0 nova_compute[189381]: 2025-11-25 11:14:39.819 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:42 compute-0 podman[259582]: 2025-11-25 11:14:42.003826553 +0000 UTC m=+0.105164177 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:14:42 compute-0 podman[259581]: 2025-11-25 11:14:42.014426588 +0000 UTC m=+0.125511862 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 11:14:42 compute-0 nova_compute[189381]: 2025-11-25 11:14:42.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:43 compute-0 nova_compute[189381]: 2025-11-25 11:14:43.823 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:44 compute-0 podman[259622]: 2025-11-25 11:14:44.734035329 +0000 UTC m=+0.061225727 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 11:14:44 compute-0 podman[259621]: 2025-11-25 11:14:44.784522687 +0000 UTC m=+0.113893398 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 11:14:44 compute-0 nova_compute[189381]: 2025-11-25 11:14:44.821 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.051 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.131 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.191 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.192 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.249 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.256 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.316 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.317 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.386 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.725 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.727 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4969MB free_disk=72.07057189941406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.727 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.727 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.807 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.807 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.808 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.808 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.919 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.935 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
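
Placement skipped the inventory update because nothing changed. The schedulable capacity behind these numbers follows placement's documented check, used + requested <= (total - reserved) * allocation_ratio; a small sketch computing it from the logged inventory (the formula is standard placement behavior, stated here from memory):

    # Sketch: effective capacity per resource class from the logged inventory.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity}")

With the 4.0 VCPU overcommit this host can schedule up to 32 vCPUs despite having 8 physical ones, which is consistent with the "Total usable vcpus: 8, total allocated vcpus: 2" line above.
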
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.936 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:14:45 compute-0 nova_compute[189381]: 2025-11-25 11:14:45.937 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.209s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
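
The acquire/release pair around _update_available_resource is oslo.concurrency's synchronized() decorator serializing resource-tracker updates under the "compute_resources" lock. A minimal sketch of the same pattern, assuming only the public lockutils API; the function body is illustrative:

    # Sketch of the lock pattern logged above: synchronized() ensures only
    # one thread recomputes the host resource view at a time.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # recompute free_ram/free_disk/free_vcpus while no other
        # thread mutates the tracker state
        pass

The "waited 0.000s" / "held 0.209s" fields in the log come from this wrapper's timing instrumentation.
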
Nov 25 11:14:46 compute-0 nova_compute[189381]: 2025-11-25 11:14:46.938 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:46 compute-0 nova_compute[189381]: 2025-11-25 11:14:46.939 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:14:47 compute-0 nova_compute[189381]: 2025-11-25 11:14:47.393 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:14:47 compute-0 nova_compute[189381]: 2025-11-25 11:14:47.394 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:14:47 compute-0 nova_compute[189381]: 2025-11-25 11:14:47.394 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:14:48 compute-0 nova_compute[189381]: 2025-11-25 11:14:48.411 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updating instance_info_cache with network_info: [{"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:14:48 compute-0 nova_compute[189381]: 2025-11-25 11:14:48.423 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:14:48 compute-0 nova_compute[189381]: 2025-11-25 11:14:48.424 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
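
The network_info blob written to the cache at 11:14:48 is a list of VIF dicts. A sketch pulling the fixed IP and device name out of that structure, trimmed to the fields that appear in the log entry:

    # Sketch: extract fixed IPs and OVS details from the logged network_info
    # (one OVS/OVN VIF with a single IPv4 subnet).
    network_info = [{
        "id": "00b30981-5989-421b-9886-4a0d1020874c",
        "address": "fa:16:3e:93:2c:2e",
        "network": {"subnets": [{"cidr": "10.100.0.0/16",
                                 "ips": [{"address": "10.100.0.181", "type": "fixed"}]}]},
        "type": "ovs",
        "details": {"bridge_name": "br-int", "bound_drivers": {"0": "ovn"}},
        "devname": "tap00b30981-59",
    }]
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"] if ip["type"] == "fixed"]
        print(vif["devname"], vif["address"], ips)
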
Nov 25 11:14:48 compute-0 nova_compute[189381]: 2025-11-25 11:14:48.424 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:48 compute-0 nova_compute[189381]: 2025-11-25 11:14:48.425 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:48 compute-0 nova_compute[189381]: 2025-11-25 11:14:48.826 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:48 compute-0 podman[259679]: 2025-11-25 11:14:48.947935767 +0000 UTC m=+0.065149139 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
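
podman emits one of these health_status events per healthcheck interval. A sketch reading the same state on demand via podman inspect; the .State.Health JSON path is standard podman output, and the container name comes from the event:

    # Sketch: query the health state that produced the event above.
    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "podman_exporter"],
        capture_output=True, check=True,
    )
    state = json.loads(out.stdout)[0]["State"]
    print(state["Health"]["Status"], state["Health"]["FailingStreak"])
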
Nov 25 11:14:49 compute-0 nova_compute[189381]: 2025-11-25 11:14:49.823 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:50 compute-0 nova_compute[189381]: 2025-11-25 11:14:50.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:50 compute-0 nova_compute[189381]: 2025-11-25 11:14:50.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:50 compute-0 nova_compute[189381]: 2025-11-25 11:14:50.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 11:14:50 compute-0 nova_compute[189381]: 2025-11-25 11:14:50.039 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 11:14:53 compute-0 nova_compute[189381]: 2025-11-25 11:14:53.040 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:53 compute-0 nova_compute[189381]: 2025-11-25 11:14:53.041 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:14:53 compute-0 nova_compute[189381]: 2025-11-25 11:14:53.828 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:54 compute-0 nova_compute[189381]: 2025-11-25 11:14:54.826 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:55 compute-0 nova_compute[189381]: 2025-11-25 11:14:55.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:14:58 compute-0 nova_compute[189381]: 2025-11-25 11:14:58.830 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:14:58 compute-0 podman[259718]: 2025-11-25 11:14:58.949002909 +0000 UTC m=+0.065356216 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 25 11:14:58 compute-0 podman[259719]: 2025-11-25 11:14:58.97797712 +0000 UTC m=+0.091085054 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 25 11:14:59 compute-0 podman[203557]: time="2025-11-25T11:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:14:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:14:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 25 11:14:59 compute-0 nova_compute[189381]: 2025-11-25 11:14:59.830 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:00 compute-0 sshd-session[259677]: Connection closed by authenticating user root 210.16.180.226 port 43548 [preauth]
Nov 25 11:15:00 compute-0 sshd-session[259677]: (Repeated failed root logins from external addresses like the one above indicate internet-exposed SSH; see earlier preauth entries.)
Nov 25 11:15:01 compute-0 nova_compute[189381]: 2025-11-25 11:15:01.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:01 compute-0 openstack_network_exporter[205722]: ERROR   11:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:15:01 compute-0 openstack_network_exporter[205722]: ERROR   11:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:15:01 compute-0 openstack_network_exporter[205722]: ERROR   11:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:15:01 compute-0 openstack_network_exporter[205722]: ERROR   11:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:15:01 compute-0 openstack_network_exporter[205722]: ERROR   11:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
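
These exporter errors are expected on a compute node: ovn-northd and the OVN database servers run on the control plane, so their appctl control sockets never exist here, and no userspace (dpdk) datapath is configured for the pmd-* queries. A sketch of the underlying existence check; the socket directories are the conventional defaults and an assumption on this host:

    # Sketch: the lookup behind "no control socket files found".
    import glob

    for daemon, pattern in {
        "ovn-northd":   "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    }.items():
        hits = glob.glob(pattern)
        print(daemon, "->", hits if hits else "no control socket files found")
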
Nov 25 11:15:01 compute-0 podman[259756]: 2025-11-25 11:15:01.966473366 +0000 UTC m=+0.073334125 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.343 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.344 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
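
With one worker thread for many pollsters, submissions queue and run sequentially, which is exactly what the warning above predicts. A minimal sketch of that queuing behavior:

    # Sketch: max_workers=1 (as logged: "[1] threads") serializes all
    # submitted pollsters, lengthening the polling cycle.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def poll(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    with ThreadPoolExecutor(max_workers=1) as pool:
        futures = [pool.submit(poll, f"pollster-{i}") for i in range(5)]
        for f in futures:
            print(f.result())    # completes one at a time
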
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240816eba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.350 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'name': 'te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.353 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'name': 'te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
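
Each discovery payload above carries the instance's flavor; mapping flavor fields to placement resource classes reproduces the allocations the resource tracker logged earlier ({'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}). A sketch of that mapping using the two logged instances:

    # Sketch: flavor fields -> placement resource classes for the two
    # m1.nano instances discovered above.
    instances = [
        {"id": "18a30ced-09e6-4c6a-9ea3-4c59f437a71a",
         "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1}},
        {"id": "dba9274f-6164-41cc-8f4b-870c1cb3f67c",
         "flavor": {"name": "m1.nano", "vcpus": 1, "ram": 128, "disk": 1}},
    ]
    for inst in instances:
        f = inst["flavor"]
        print(inst["id"],
              {"VCPU": f["vcpus"], "MEMORY_MB": f["ram"], "DISK_GB": f["disk"]})
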
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.354 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.354 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:15:03.354397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.358 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.361 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.362 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.362 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.363 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
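
A .delta pollster subtracts the previous cumulative reading from the current one. The 630 for dba9274f... implies a prior reading of 1620 (an inference; only the current cycle is logged, so treat the "previous" values as reconstructed). A sketch of the arithmetic:

    # Sketch: deriving the logged delta samples from cumulative counters.
    previous = {"18a30ced": 2250, "dba9274f": 1620}   # reconstructed
    current  = {"18a30ced": 2250, "dba9274f": 2250}   # from the log

    for instance, now in current.items():
        delta = now - previous.get(instance, now)
        print(f"{instance}/network.outgoing.bytes.delta volume: {delta}")
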
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.363 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:15:03.362539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:15:03.364258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.383 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/memory.usage volume: 42.421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.402 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/memory.usage volume: 42.65625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.403 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
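
memory.usage is reported in MiB and comes from libvirt's per-domain memory statistics. A sketch using libvirt-python; the (available - unused) formula mirrors ceilometer's libvirt inspector as I understand it and should be treated as an assumption, as should the balloon driver reporting both keys:

    # Sketch: deriving memory.usage (MiB) from libvirt domain memory stats.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000f")   # dba9274f... per the log
    stats = dom.memoryStats()                      # values in KiB
    usage_mib = (stats["available"] - stats["unused"]) / 1024
    print(f"memory.usage volume: {usage_mib}")
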
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.403 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.403 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.404 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.404 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.404 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.404 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:15:03.404145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.405 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.405 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.406 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:15:03.405480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.407 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.407 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.407 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.408 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:15:03.407074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.408 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/cpu volume: 337770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.409 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/cpu volume: 335570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:15:03.408694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
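[annotation] The cpu volumes above (337770000000 and 335570000000) are cumulative guest CPU time in nanoseconds, not percentages; utilization is derived downstream by differencing two polls (typically in the time-series backend's rate aggregation). A minimal sketch of that arithmetic, where the 300 s interval, the vCPU count, and the second reading are assumptions for illustration:

    # Hedged example: derive CPU utilisation from two cumulative "cpu"
    # samples. Interval, vCPU count, and the follow-up reading are assumed.
    def cpu_util(prev_ns, curr_ns, interval_s, vcpus):
        """Fraction of available CPU time consumed between two polls."""
        used_s = (curr_ns - prev_ns) / 1e9
        return used_s / (interval_s * vcpus)

    # The first instance above, assuming +15e9 ns at the next 300 s poll:
    print(f"{cpu_util(337_770_000_000, 352_770_000_000, 300, 1):.1%}")  # 5.0%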
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.409 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.409 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.409 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.410 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.410 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.411 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:15:03.409886) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:15:03.411066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.424 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.424 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.438 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.438 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
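[annotation] disk.device.capacity emits two samples per instance because each guest exposes two block devices here: a 1 GiB volume (1073741824 bytes) and a small 509952-byte (498 KiB) device, most likely a config drive. These per-device figures come from libvirt's block info, which also feeds the disk.device.allocation and disk.device.usage samples later in this cycle. A standalone sketch with libvirt-python; the qemu:///system URI and the vda/vdb device names are assumptions, while the UUID is taken from the log:

    # Minimal sketch using libvirt-python; URI and device names assumed.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("18a30ced-09e6-4c6a-9ea3-4c59f437a71a")
    for dev in ("vda", "vdb"):
        # blockInfo() returns [capacity, allocation, physical] in bytes
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)
    conn.close()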
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.439 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.439 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:15:03.439838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.477 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.478 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.515 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 29710848 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.516 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.517 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.517 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.517 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 1630906369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.518 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 77005350 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.518 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 1460406255 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.518 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 117578287 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.519 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.519 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.520 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:15:03.517715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.520 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.520 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.521 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 1068 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.521 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:15:03.520477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.522 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.523 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.523 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.524 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:15:03.522976) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.524 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.525 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.525 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.526 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.526 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.526 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.526 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.527 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:15:03.526171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.528 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.528 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.528 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 11156943053 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.529 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.529 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 3366681665 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.529 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.530 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:15:03.528799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.530 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.531 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:15:03.531006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.531 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
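[annotation] The power.state volume of 1 for both instances is libvirt's numeric domain state, where 1 means "running". A small sketch that queries and names the state for the second instance (connection URI assumed as before; dom.state() returns a [state, reason] pair):

    # Hedged sketch: translate libvirt domain state codes, such as the
    # "volume: 1" above, into names.
    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_NOSTATE: "nostate",          # 0
        libvirt.VIR_DOMAIN_RUNNING: "running",          # 1
        libvirt.VIR_DOMAIN_BLOCKED: "blocked",          # 2
        libvirt.VIR_DOMAIN_PAUSED: "paused",            # 3
        libvirt.VIR_DOMAIN_SHUTDOWN: "shutdown",        # 4
        libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",          # 5
        libvirt.VIR_DOMAIN_CRASHED: "crashed",          # 6
        libvirt.VIR_DOMAIN_PMSUSPENDED: "pmsuspended",  # 7
    }

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("dba9274f-6164-41cc-8f4b-870c1cb3f67c")
    state, reason = dom.state()
    print(STATE_NAMES.get(state, "unknown"), state)  # "running" 1 in this log
    conn.close()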
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.532 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.532 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.532 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.532 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.532 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.532 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.532 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.532 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.533 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.534 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:15:03.532429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.534 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.534 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.534 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.535 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.535 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:15:03.534316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.535 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.535 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.535 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.535 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.536 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.536 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:15:03.535932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.536 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 30220288 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.537 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.537 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.537 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.537 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.538 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.538 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:15:03.538102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.539 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.539 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.539 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.540 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:15:03.539106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.540 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.540 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:15:03.540417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.541 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.541 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.541 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.541 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.541 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.541 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.542 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.542 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:15:03.541708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.543 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:15:03.543326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.543 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.544 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.545 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.546 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:15:03.547 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:15:03 compute-0 nova_compute[189381]: 2025-11-25 11:15:03.833 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:04 compute-0 nova_compute[189381]: 2025-11-25 11:15:04.833 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:08 compute-0 nova_compute[189381]: 2025-11-25 11:15:08.835 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:08 compute-0 podman[259775]: 2025-11-25 11:15:08.93998536 +0000 UTC m=+0.055373170 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 25 11:15:09 compute-0 nova_compute[189381]: 2025-11-25 11:15:09.835 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:12 compute-0 podman[259793]: 2025-11-25 11:15:12.952880648 +0000 UTC m=+0.069000970 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 25 11:15:12 compute-0 podman[259794]: 2025-11-25 11:15:12.954810433 +0000 UTC m=+0.066034745 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 11:15:13 compute-0 nova_compute[189381]: 2025-11-25 11:15:13.028 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:13 compute-0 nova_compute[189381]: 2025-11-25 11:15:13.838 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:14 compute-0 nova_compute[189381]: 2025-11-25 11:15:14.837 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:14 compute-0 podman[259837]: 2025-11-25 11:15:14.970482044 +0000 UTC m=+0.072718517 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 25 11:15:15 compute-0 podman[259836]: 2025-11-25 11:15:15.029725043 +0000 UTC m=+0.133836010 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:15:18 compute-0 nova_compute[189381]: 2025-11-25 11:15:18.839 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:19 compute-0 nova_compute[189381]: 2025-11-25 11:15:19.840 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:19 compute-0 podman[259882]: 2025-11-25 11:15:19.962061766 +0000 UTC m=+0.071041029 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 11:15:23 compute-0 nova_compute[189381]: 2025-11-25 11:15:23.841 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:24 compute-0 nova_compute[189381]: 2025-11-25 11:15:24.844 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:28 compute-0 nova_compute[189381]: 2025-11-25 11:15:28.843 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:29 compute-0 podman[203557]: time="2025-11-25T11:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:15:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:15:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 25 11:15:29 compute-0 nova_compute[189381]: 2025-11-25 11:15:29.846 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:29 compute-0 podman[259906]: 2025-11-25 11:15:29.956279161 +0000 UTC m=+0.074175699 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 25 11:15:29 compute-0 podman[259907]: 2025-11-25 11:15:29.957025032 +0000 UTC m=+0.067436905 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:15:31 compute-0 openstack_network_exporter[205722]: ERROR   11:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:15:31 compute-0 openstack_network_exporter[205722]: ERROR   11:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:15:31 compute-0 openstack_network_exporter[205722]: ERROR   11:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:15:31 compute-0 openstack_network_exporter[205722]: ERROR   11:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:15:31 compute-0 openstack_network_exporter[205722]: ERROR   11:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:15:32 compute-0 podman[259941]: 2025-11-25 11:15:32.956519714 +0000 UTC m=+0.071751840 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container)
Nov 25 11:15:33 compute-0 nova_compute[189381]: 2025-11-25 11:15:33.846 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:34 compute-0 nova_compute[189381]: 2025-11-25 11:15:34.848 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.195 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.215 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.215 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Triggering sync for uuid dba9274f-6164-41cc-8f4b-870c1cb3f67c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.216 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.216 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.216 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.217 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.248 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:15:35 compute-0 nova_compute[189381]: 2025-11-25 11:15:35.249 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:15:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:15:36.081 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:15:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:15:36.081 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:15:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:15:36.082 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:15:38 compute-0 nova_compute[189381]: 2025-11-25 11:15:38.849 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:39 compute-0 nova_compute[189381]: 2025-11-25 11:15:39.044 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:39 compute-0 nova_compute[189381]: 2025-11-25 11:15:39.851 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:39 compute-0 podman[259959]: 2025-11-25 11:15:39.934539291 +0000 UTC m=+0.056014087 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 25 11:15:42 compute-0 nova_compute[189381]: 2025-11-25 11:15:42.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:43 compute-0 nova_compute[189381]: 2025-11-25 11:15:43.851 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:43 compute-0 podman[259979]: 2025-11-25 11:15:43.954704328 +0000 UTC m=+0.061954848 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 11:15:43 compute-0 podman[259978]: 2025-11-25 11:15:43.968981468 +0000 UTC m=+0.078643607 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, config_id=edpm, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 11:15:44 compute-0 nova_compute[189381]: 2025-11-25 11:15:44.854 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:45 compute-0 podman[260023]: 2025-11-25 11:15:45.949855378 +0000 UTC m=+0.065087578 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:15:46 compute-0 podman[260022]: 2025-11-25 11:15:46.011731123 +0000 UTC m=+0.129467865 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 25 11:15:46 compute-0 nova_compute[189381]: 2025-11-25 11:15:46.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:46 compute-0 nova_compute[189381]: 2025-11-25 11:15:46.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:15:46 compute-0 nova_compute[189381]: 2025-11-25 11:15:46.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:15:46 compute-0 nova_compute[189381]: 2025-11-25 11:15:46.406 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:15:46 compute-0 nova_compute[189381]: 2025-11-25 11:15:46.407 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:15:46 compute-0 nova_compute[189381]: 2025-11-25 11:15:46.407 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:15:46 compute-0 nova_compute[189381]: 2025-11-25 11:15:46.407 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.870 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.890 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.890 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.891 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.916 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.917 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.918 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.918 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:15:47 compute-0 nova_compute[189381]: 2025-11-25 11:15:47.992 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.052 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.052 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.110 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.116 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.184 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.185 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.254 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.558 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.560 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4967MB free_disk=72.07057189941406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.560 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.560 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.738 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.739 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.739 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.739 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:15:48 compute-0 nova_compute[189381]: 2025-11-25 11:15:48.853 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:49 compute-0 nova_compute[189381]: 2025-11-25 11:15:49.084 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:15:49 compute-0 nova_compute[189381]: 2025-11-25 11:15:49.131 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:15:49 compute-0 nova_compute[189381]: 2025-11-25 11:15:49.133 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:15:49 compute-0 nova_compute[189381]: 2025-11-25 11:15:49.133 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:15:49 compute-0 nova_compute[189381]: 2025-11-25 11:15:49.263 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:49 compute-0 nova_compute[189381]: 2025-11-25 11:15:49.264 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:49 compute-0 nova_compute[189381]: 2025-11-25 11:15:49.856 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:50 compute-0 nova_compute[189381]: 2025-11-25 11:15:50.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:50 compute-0 podman[260076]: 2025-11-25 11:15:50.944348246 +0000 UTC m=+0.059536729 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:15:53 compute-0 nova_compute[189381]: 2025-11-25 11:15:53.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:53 compute-0 nova_compute[189381]: 2025-11-25 11:15:53.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:15:53 compute-0 nova_compute[189381]: 2025-11-25 11:15:53.857 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:54 compute-0 nova_compute[189381]: 2025-11-25 11:15:54.860 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:57 compute-0 nova_compute[189381]: 2025-11-25 11:15:57.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:15:58 compute-0 nova_compute[189381]: 2025-11-25 11:15:58.860 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:15:59 compute-0 podman[203557]: time="2025-11-25T11:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:15:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:15:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 25 11:15:59 compute-0 nova_compute[189381]: 2025-11-25 11:15:59.863 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:00 compute-0 podman[260100]: 2025-11-25 11:16:00.945010663 +0000 UTC m=+0.062965577 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 11:16:00 compute-0 podman[260101]: 2025-11-25 11:16:00.979728219 +0000 UTC m=+0.092521685 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 25 11:16:01 compute-0 openstack_network_exporter[205722]: ERROR   11:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:16:01 compute-0 openstack_network_exporter[205722]: ERROR   11:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:16:01 compute-0 openstack_network_exporter[205722]: ERROR   11:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:16:01 compute-0 openstack_network_exporter[205722]: ERROR   11:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:16:01 compute-0 openstack_network_exporter[205722]: ERROR   11:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:16:03 compute-0 nova_compute[189381]: 2025-11-25 11:16:03.862 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:03 compute-0 podman[260141]: 2025-11-25 11:16:03.952705709 +0000 UTC m=+0.067484167 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 25 11:16:04 compute-0 nova_compute[189381]: 2025-11-25 11:16:04.866 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:08 compute-0 nova_compute[189381]: 2025-11-25 11:16:08.865 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:09 compute-0 nova_compute[189381]: 2025-11-25 11:16:09.869 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:10 compute-0 podman[260160]: 2025-11-25 11:16:10.97720812 +0000 UTC m=+0.085032271 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 11:16:13 compute-0 nova_compute[189381]: 2025-11-25 11:16:13.867 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:14 compute-0 podman[260180]: 2025-11-25 11:16:14.738229215 +0000 UTC m=+0.063936025 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:16:14 compute-0 podman[260179]: 2025-11-25 11:16:14.769480511 +0000 UTC m=+0.098364582 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 25 11:16:14 compute-0 nova_compute[189381]: 2025-11-25 11:16:14.872 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:16 compute-0 podman[260224]: 2025-11-25 11:16:16.967003746 +0000 UTC m=+0.077920896 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
Nov 25 11:16:17 compute-0 podman[260223]: 2025-11-25 11:16:17.0058096 +0000 UTC m=+0.117151022 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 25 11:16:18 compute-0 nova_compute[189381]: 2025-11-25 11:16:18.868 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:19 compute-0 nova_compute[189381]: 2025-11-25 11:16:19.876 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:21 compute-0 podman[260268]: 2025-11-25 11:16:21.955985491 +0000 UTC m=+0.068612500 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 11:16:23 compute-0 nova_compute[189381]: 2025-11-25 11:16:23.871 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:24 compute-0 nova_compute[189381]: 2025-11-25 11:16:24.881 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:28 compute-0 nova_compute[189381]: 2025-11-25 11:16:28.874 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:29 compute-0 podman[203557]: time="2025-11-25T11:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:16:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:16:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 25 11:16:29 compute-0 nova_compute[189381]: 2025-11-25 11:16:29.886 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:31 compute-0 openstack_network_exporter[205722]: ERROR   11:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:16:31 compute-0 openstack_network_exporter[205722]: ERROR   11:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:16:31 compute-0 openstack_network_exporter[205722]: ERROR   11:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:16:31 compute-0 openstack_network_exporter[205722]: ERROR   11:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:16:31 compute-0 openstack_network_exporter[205722]: ERROR   11:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:16:31 compute-0 podman[260292]: 2025-11-25 11:16:31.957269773 +0000 UTC m=+0.064074959 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118)
Nov 25 11:16:31 compute-0 podman[260293]: 2025-11-25 11:16:31.992504823 +0000 UTC m=+0.095463699 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:16:33 compute-0 nova_compute[189381]: 2025-11-25 11:16:33.877 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:34 compute-0 nova_compute[189381]: 2025-11-25 11:16:34.889 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:34 compute-0 podman[260328]: 2025-11-25 11:16:34.958725839 +0000 UTC m=+0.069110002 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public)
Nov 25 11:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:16:36.082 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:16:36.083 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:16:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:16:36.084 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:16:38 compute-0 nova_compute[189381]: 2025-11-25 11:16:38.879 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:39 compute-0 nova_compute[189381]: 2025-11-25 11:16:39.891 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:41 compute-0 nova_compute[189381]: 2025-11-25 11:16:41.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:41 compute-0 podman[260348]: 2025-11-25 11:16:41.953347813 +0000 UTC m=+0.056495822 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 11:16:43 compute-0 nova_compute[189381]: 2025-11-25 11:16:43.881 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:44 compute-0 nova_compute[189381]: 2025-11-25 11:16:44.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:44 compute-0 nova_compute[189381]: 2025-11-25 11:16:44.894 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:44 compute-0 podman[260366]: 2025-11-25 11:16:44.945958157 +0000 UTC m=+0.060662091 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, release=1755695350)
Nov 25 11:16:44 compute-0 podman[260367]: 2025-11-25 11:16:44.948779688 +0000 UTC m=+0.059834947 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:16:46 compute-0 nova_compute[189381]: 2025-11-25 11:16:46.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:46 compute-0 nova_compute[189381]: 2025-11-25 11:16:46.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:16:46 compute-0 nova_compute[189381]: 2025-11-25 11:16:46.545 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:16:46 compute-0 nova_compute[189381]: 2025-11-25 11:16:46.546 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:16:46 compute-0 nova_compute[189381]: 2025-11-25 11:16:46.546 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:16:47 compute-0 nova_compute[189381]: 2025-11-25 11:16:47.938 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updating instance_info_cache with network_info: [{"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:16:47 compute-0 nova_compute[189381]: 2025-11-25 11:16:47.956 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-dba9274f-6164-41cc-8f4b-870c1cb3f67c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:16:47 compute-0 nova_compute[189381]: 2025-11-25 11:16:47.956 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:16:47 compute-0 nova_compute[189381]: 2025-11-25 11:16:47.956 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:47 compute-0 podman[260410]: 2025-11-25 11:16:47.975147069 +0000 UTC m=+0.075661782 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 25 11:16:47 compute-0 nova_compute[189381]: 2025-11-25 11:16:47.982 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:16:47 compute-0 nova_compute[189381]: 2025-11-25 11:16:47.984 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:16:47 compute-0 nova_compute[189381]: 2025-11-25 11:16:47.984 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:16:47 compute-0 nova_compute[189381]: 2025-11-25 11:16:47.984 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:16:48 compute-0 podman[260409]: 2025-11-25 11:16:48.024931517 +0000 UTC m=+0.133298755 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.059 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.121 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.123 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.184 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.191 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.253 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.254 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.324 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
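The qemu-img probes above are what nova-compute wraps in oslo.concurrency's prlimit helper to cap the child's address space and CPU time. A minimal sketch of the same invocation via plain subprocess, assuming the packages and the instance disk path exist on the host; this is a stand-in for illustration, not nova's own code path:

    import json
    import subprocess

    # Disk path copied from the logged command; adjust for your instance.
    disk = '/var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk'

    # prlimit caps the helper at 1 GiB of address space and 30 s of CPU
    # time, exactly as in the command line logged above.
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', disk, '--force-share', '--output=json']
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    info = json.loads(out.stdout)
    print(info.get('format'), info.get('virtual-size'))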
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.703 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.704 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4936MB free_disk=72.07057189941406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.704 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.705 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
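The acquire/acquired pair above comes from oslo.concurrency's named-lock decorator serialising access to the resource tracker. A minimal sketch of guarding a critical section the same way; update_available_resource here is an illustrative placeholder, not nova's actual method:

    from oslo_concurrency import lockutils

    # The named lock serialises concurrent callers, producing the same
    # acquire/release log pattern seen above.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # recompute and persist the host resource view here

    update_available_resource()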
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.783 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.784 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.784 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.784 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.846 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.861 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.862 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.862 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
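The inventory dict logged above is enough to recompute what placement will actually schedule: capacity per resource class is (total - reserved) * allocation_ratio. A small worked sketch using the reported figures; effective_capacity is an illustrative helper, not a placement API:

    def effective_capacity(total, reserved, allocation_ratio):
        # Schedulable capacity for one inventory class.
        return int((total - reserved) * allocation_ratio)

    # Using the inventory reported above:
    print(effective_capacity(8, 0, 4.0))       # VCPU      -> 32
    print(effective_capacity(7679, 512, 1.0))  # MEMORY_MB -> 7167
    print(effective_capacity(79, 1, 0.9))      # DISK_GB   -> 70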
Nov 25 11:16:48 compute-0 nova_compute[189381]: 2025-11-25 11:16:48.882 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:49 compute-0 nova_compute[189381]: 2025-11-25 11:16:49.896 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:50 compute-0 nova_compute[189381]: 2025-11-25 11:16:50.927 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:51 compute-0 nova_compute[189381]: 2025-11-25 11:16:51.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:52 compute-0 nova_compute[189381]: 2025-11-25 11:16:52.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:52 compute-0 podman[260469]: 2025-11-25 11:16:52.95446238 +0000 UTC m=+0.061662009 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:16:53 compute-0 nova_compute[189381]: 2025-11-25 11:16:53.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:53 compute-0 nova_compute[189381]: 2025-11-25 11:16:53.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:16:53 compute-0 nova_compute[189381]: 2025-11-25 11:16:53.884 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:54 compute-0 nova_compute[189381]: 2025-11-25 11:16:54.898 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:58 compute-0 nova_compute[189381]: 2025-11-25 11:16:58.886 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:16:59 compute-0 nova_compute[189381]: 2025-11-25 11:16:59.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:16:59 compute-0 podman[203557]: time="2025-11-25T11:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:16:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:16:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
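The two GET lines above are the podman exporter querying the libpod REST API over /run/podman/podman.sock. A minimal stdlib sketch of the same request, assuming the socket is readable by the caller; UnixHTTPConnection is an illustrative shim, not a podman client library:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough HTTP-over-unix-socket to talk to libpod."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())))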
Nov 25 11:16:59 compute-0 nova_compute[189381]: 2025-11-25 11:16:59.901 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:01 compute-0 openstack_network_exporter[205722]: ERROR   11:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:17:01 compute-0 openstack_network_exporter[205722]: ERROR   11:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:17:01 compute-0 openstack_network_exporter[205722]: ERROR   11:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:17:01 compute-0 openstack_network_exporter[205722]: ERROR   11:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:17:01 compute-0 openstack_network_exporter[205722]: ERROR   11:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:17:02 compute-0 podman[260494]: 2025-11-25 11:17:02.954571278 +0000 UTC m=+0.066661283 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Nov 25 11:17:02 compute-0 podman[260493]: 2025-11-25 11:17:02.993812244 +0000 UTC m=+0.108132543 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.344 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.344 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
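The registration lines above queue every pollster onto a ThreadPoolExecutor with a single worker, which is why the agent warns when pollsters outnumber threads. A minimal sketch of that dispatch pattern; run_polling_cycle and the lambda pollsters are illustrative, not ceilometer's manager:

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, resources, workers=1):
        # With one worker (as logged above), queued pollsters drain serially.
        if len(pollsters) > workers:
            print('more pollsters than worker threads; expect a longer cycle')
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(p, resources) for p in pollsters]
            return [f.result() for f in futures]

    samples = run_polling_cycle([lambda r: len(r)] * 3, ['vm-a', 'vm-b'])
    print(samples)  # [2, 2, 2]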
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.352 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'name': 'te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.354 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'name': 'te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx', 'flavor': {'id': 'b7c0626e-febc-4083-b621-6f5ee0740a18', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '62ab6b08-ec10-4838-aa81-24150af36537'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'user_id': '95acdf386c1e42c8a6da1f7b9603054f', 'hostId': '70ac76a5e5a97ee1b0508269f38a8db2fdcc8835aa32624f7b80d162', 'status': 'active', 'metadata': {'metering.server_group': 'f33016ec-000f-44cf-b7cc-2122723ba143'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
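The discovery payloads above carry the fields the pollsters key on: the vm_state and the metering.server_group metadata. A small sketch of filtering and grouping that data; running_instances is an illustrative helper and the IDs are abbreviated placeholders:

    def running_instances(discovered):
        # Keep running instances, grouped by metering.server_group.
        groups = {}
        for inst in discovered:
            if inst.get('OS-EXT-STS:vm_state') != 'running':
                continue
            group = inst.get('metadata', {}).get('metering.server_group')
            groups.setdefault(group, []).append(inst['id'])
        return groups

    print(running_instances([{'id': 'instance-a',
                              'OS-EXT-STS:vm_state': 'running',
                              'metadata': {'metering.server_group': 'group-1'}}]))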
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.354 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.355 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-25T11:17:03.355156) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.358 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.361 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.362 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.362 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.363 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-25T11:17:03.362618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
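The .delta volumes above read 0 because a delta is taken against the previous cumulative sample, and the first observation has nothing to subtract from. A minimal sketch of that bookkeeping; cumulative_to_delta is an illustrative helper, not ceilometer's transformer:

    _previous = {}

    def cumulative_to_delta(resource_id, meter, value):
        # The first observation yields 0, matching the .delta samples above.
        key = (resource_id, meter)
        delta = max(value - _previous.get(key, value), 0)
        _previous[key] = value
        return delta

    print(cumulative_to_delta('vm-a', 'network.outgoing.bytes', 2250))  # 0
    print(cumulative_to_delta('vm-a', 'network.outgoing.bytes', 2400))  # 150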
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.363 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.364 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-25T11:17:03.364063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.382 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/memory.usage volume: 42.421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.401 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/memory.usage volume: 42.65625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
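The memory.usage figures above are derived from the guest's balloon counters. A sketch of reading them directly with libvirt-python, assuming hypervisor access and that the balloon driver exposes 'available'/'unused'; the usage formula shown is a plausible assumption, not necessarily the one ceilometer applies:

    import libvirt  # requires libvirt-python and access to qemu:///system

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('18a30ced-09e6-4c6a-9ea3-4c59f437a71a')
    stats = dom.memoryStats()  # KiB counters; keys depend on the balloon driver
    if 'available' in stats and 'unused' in stats:
        print((stats['available'] - stats['unused']) / 1024.0)  # MiB in use
    else:
        print(stats.get('rss', 0) / 1024.0)  # fall back to host-side RSS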
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.402 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.403 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.403 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-25T11:17:03.403297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.404 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.404 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.405 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.405 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.405 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-25T11:17:03.405125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.406 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.407 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-25T11:17:03.407107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.407 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.408 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-25T11:17:03.408717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.408 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/cpu volume: 338960000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.409 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/cpu volume: 336760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
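The cpu volumes above are cumulative guest CPU time in nanoseconds, so a utilisation percentage needs two samples. A sketch with libvirt-python, assuming hypervisor access; dom.info()[4] is the cumulative cpuTime in ns:

    import time
    import libvirt  # requires libvirt-python and access to qemu:///system

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('18a30ced-09e6-4c6a-9ea3-4c59f437a71a')

    # dom.info() -> [state, maxMem, memory, nrVirtCpu, cpuTime(ns)]
    t0, c0 = time.time(), dom.info()[4]
    time.sleep(10)
    t1, c1 = time.time(), dom.info()[4]
    vcpus = dom.info()[3]
    util = (c1 - c0) / ((t1 - t0) * 1e9 * vcpus) * 100.0
    print('%.1f%% of %d vCPU' % (util, vcpus))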
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.410 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f2408644320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f2408644320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.410 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.410 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-25T11:17:03.410352) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.410 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.411 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-25T11:17:03.412155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.424 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.425 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.437 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.437 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
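Each instance reports disk.device.capacity twice because the pollster emits one sample per attached block device: 1073741824 B is a 1 GiB root disk and 509952 B matches a small config-drive-sized device. A hedged per-device sketch (device names below are assumptions, not taken from the log):

    # Hedged sketch of per-device sample emission; "vda"/"sda" are assumed names.
    devices = [("vda", 1073741824), ("sda", 509952)]
    for dev, capacity in devices:
        # Per-device meters are typically keyed as "<instance-uuid>-<device>".
        print(f"disk.device.capacity {dev}: {capacity} B "
              f"({capacity / 1024**2:.2f} MiB)")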
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.438 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.439 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.439 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-25T11:17:03.439170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.474 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 31070720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.474 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.506 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 29710848 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.507 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.507 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-25T11:17:03.508083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.508 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.509 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 1630906369 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.509 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.latency volume: 77005350 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.509 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 1460406255 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.509 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.latency volume: 117578287 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.510 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.511 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-25T11:17:03.510841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.510 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.511 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 1136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.511 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.512 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 1068 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.512 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
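The disk.device.read.latency counters above are cumulative nanoseconds spent servicing reads, so dividing by the matching disk.device.read.requests counters gives a mean time per read. A small worked check for instance 18a30ced..., assuming both pollsters list the two devices in the same order:

    # Hedged arithmetic; the per-device latency/request pairing is assumed.
    pairs = [(1630906369, 1136), (77005350, 120)]  # (ns total, read count)
    for latency_ns, reads in pairs:
        print(f"{latency_ns / reads / 1e6:.2f} ms mean per read")
    # ~1.44 ms and ~0.64 ms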
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.513 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-25T11:17:03.513272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.513 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.513 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.514 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.514 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.514 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.515 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.516 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-25T11:17:03.515687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.515 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.516 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.516 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.516 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 73150464 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.517 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.517 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.518 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.518 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 11156943053 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-25T11:17:03.518226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.519 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.519 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 3367663373 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.519 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.520 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.520 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-25T11:17:03.520295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.520 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
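power.state volume 1 for both instances indicates a running guest: value 1 means RUNNING in libvirt's virDomainState enum and likewise in nova's power_state constants, which this meter reflects. A lookup sketch:

    # virDomainState values (stable libvirt enum); volume 1 above = running.
    VIR_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(VIR_DOMAIN_STATE[1])  # running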
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.521 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.521 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.521 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 338 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.521 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-25T11:17:03.521501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.522 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 340 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.522 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.522 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.523 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.523 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.523 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.523 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-25T11:17:03.523190) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.523 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.524 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.524 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.524 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.525 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 30220288 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.525 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-25T11:17:03.524448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.525 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
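The skip line indicates the local_instances discovery returned no (new) resources for this pollster in the current cycle, so it is short-circuited rather than polled. A toy sketch of how such a per-cycle guard might look (names invented, not ceilometer internals):

    # Hedged sketch of per-cycle discovery caching; names are invented.
    cache = {}

    def discover(method, run):
        if method not in cache:
            cache[method] = run()
        return cache[method]

    if not discover("local_instances", lambda: []):
        print("Skip pollster network.incoming.bytes.rate, "
              "no new resources found this cycle")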
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.526 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.526 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-25T11:17:03.526322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.527 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.527 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.527 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-25T11:17:03.527293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.527 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.528 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.528 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-25T11:17:03.528469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.529 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.529 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.529 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-25T11:17:03.529701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.530 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.530 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.530 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.531 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.531 14 DEBUG ceilometer.compute.pollsters [-] 18a30ced-09e6-4c6a-9ea3-4c59f437a71a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.531 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-25T11:17:03.531092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.531 14 DEBUG ceilometer.compute.pollsters [-] dba9274f-6164-41cc-8f4b-870c1cb3f67c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:17:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:17:03.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
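The run of "Finished processing pollster [...]" lines closes out one polling pass: every pollster in the task has been driven through discovery, coordination check, heartbeat, and sampling, and completion is logged per meter before the agent idles until the next interval. A simplified single-pass outline (not ceilometer's actual loop):

    # Hedged outline of one polling-task pass; not ceilometer code.
    def run_one_cycle(pollsters):
        for name, poll in pollsters.items():
            try:
                poll()  # discovery -> coordination check -> heartbeat -> samples
            finally:
                print(f"Finished processing pollster [{name}].")

    run_one_cycle({"cpu": lambda: None, "disk.device.usage": lambda: None})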
Nov 25 11:17:03 compute-0 nova_compute[189381]: 2025-11-25 11:17:03.887 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:04 compute-0 nova_compute[189381]: 2025-11-25 11:17:04.904 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:05 compute-0 podman[260535]: 2025-11-25 11:17:05.954975395 +0000 UTC m=+0.073848450 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 25 11:17:08 compute-0 nova_compute[189381]: 2025-11-25 11:17:08.890 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:09 compute-0 nova_compute[189381]: 2025-11-25 11:17:09.907 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:12 compute-0 podman[260555]: 2025-11-25 11:17:12.944269706 +0000 UTC m=+0.062373961 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 25 11:17:13 compute-0 nova_compute[189381]: 2025-11-25 11:17:13.892 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:14 compute-0 nova_compute[189381]: 2025-11-25 11:17:14.911 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:15 compute-0 podman[260574]: 2025-11-25 11:17:15.953182006 +0000 UTC m=+0.063733789 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 11:17:15 compute-0 podman[260575]: 2025-11-25 11:17:15.954208776 +0000 UTC m=+0.060765124 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 25 11:17:18 compute-0 nova_compute[189381]: 2025-11-25 11:17:18.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:18 compute-0 nova_compute[189381]: 2025-11-25 11:17:18.894 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:18 compute-0 podman[260616]: 2025-11-25 11:17:18.96603348 +0000 UTC m=+0.072227553 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd)
Nov 25 11:17:18 compute-0 podman[260615]: 2025-11-25 11:17:18.989079481 +0000 UTC m=+0.100002290 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:17:19 compute-0 nova_compute[189381]: 2025-11-25 11:17:19.914 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:23 compute-0 nova_compute[189381]: 2025-11-25 11:17:23.896 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:23 compute-0 podman[260660]: 2025-11-25 11:17:23.948655398 +0000 UTC m=+0.058530220 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:17:24 compute-0 nova_compute[189381]: 2025-11-25 11:17:24.918 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:28 compute-0 nova_compute[189381]: 2025-11-25 11:17:28.898 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:29 compute-0 podman[203557]: time="2025-11-25T11:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:17:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:17:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Nov 25 11:17:29 compute-0 nova_compute[189381]: 2025-11-25 11:17:29.921 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:31 compute-0 openstack_network_exporter[205722]: ERROR   11:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:17:31 compute-0 openstack_network_exporter[205722]: ERROR   11:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:17:31 compute-0 openstack_network_exporter[205722]: ERROR   11:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:17:31 compute-0 openstack_network_exporter[205722]: ERROR   11:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:17:31 compute-0 openstack_network_exporter[205722]: ERROR   11:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:17:33 compute-0 nova_compute[189381]: 2025-11-25 11:17:33.902 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:33 compute-0 podman[260685]: 2025-11-25 11:17:33.958849761 +0000 UTC m=+0.073358895 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, io.buildah.version=1.41.4)
Nov 25 11:17:33 compute-0 podman[260686]: 2025-11-25 11:17:33.966021027 +0000 UTC m=+0.072676246 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 25 11:17:34 compute-0 nova_compute[189381]: 2025-11-25 11:17:34.924 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:17:36.083 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:17:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:17:36.083 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:17:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:17:36.084 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:17:36 compute-0 podman[260722]: 2025-11-25 11:17:36.951712953 +0000 UTC m=+0.069299189 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, name=ubi9, version=9.4, io.buildah.version=1.29.0, release-0.7.12=, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 25 11:17:38 compute-0 nova_compute[189381]: 2025-11-25 11:17:38.902 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:39 compute-0 nova_compute[189381]: 2025-11-25 11:17:39.927 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:41 compute-0 nova_compute[189381]: 2025-11-25 11:17:41.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:43 compute-0 nova_compute[189381]: 2025-11-25 11:17:43.905 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:43 compute-0 podman[260741]: 2025-11-25 11:17:43.952882291 +0000 UTC m=+0.059602380 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:17:44 compute-0 nova_compute[189381]: 2025-11-25 11:17:44.931 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:46 compute-0 nova_compute[189381]: 2025-11-25 11:17:46.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:46 compute-0 podman[260760]: 2025-11-25 11:17:46.983645369 +0000 UTC m=+0.083832586 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:17:46 compute-0 podman[260759]: 2025-11-25 11:17:46.992265356 +0000 UTC m=+0.097349464 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, io.openshift.expose-services=)
Nov 25 11:17:47 compute-0 nova_compute[189381]: 2025-11-25 11:17:47.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:47 compute-0 nova_compute[189381]: 2025-11-25 11:17:47.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:17:47 compute-0 nova_compute[189381]: 2025-11-25 11:17:47.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:17:47 compute-0 nova_compute[189381]: 2025-11-25 11:17:47.218 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 25 11:17:47 compute-0 nova_compute[189381]: 2025-11-25 11:17:47.218 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquired lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 25 11:17:47 compute-0 nova_compute[189381]: 2025-11-25 11:17:47.219 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 25 11:17:47 compute-0 nova_compute[189381]: 2025-11-25 11:17:47.219 189385 DEBUG nova.objects.instance [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.901 189385 DEBUG nova.network.neutron [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [{"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.906 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.914 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Releasing lock "refresh_cache-18a30ced-09e6-4c6a-9ea3-4c59f437a71a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.914 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.915 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.945 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.946 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.946 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:17:48 compute-0 nova_compute[189381]: 2025-11-25 11:17:48.947 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.053 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.119 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.121 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.189 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.196 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.263 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.264 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.324 189385 DEBUG oslo_concurrency.processutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.692 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.693 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4905MB free_disk=72.070068359375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.694 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.694 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.805 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.806 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Instance dba9274f-6164-41cc-8f4b-870c1cb3f67c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.806 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.807 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.870 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.895 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.897 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.898 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:17:49 compute-0 nova_compute[189381]: 2025-11-25 11:17:49.933 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:49 compute-0 podman[260816]: 2025-11-25 11:17:49.970167187 +0000 UTC m=+0.072628665 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:17:50 compute-0 podman[260815]: 2025-11-25 11:17:50.000826136 +0000 UTC m=+0.107040831 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 25 11:17:50 compute-0 sshd-session[260800]: Connection closed by authenticating user root 171.244.51.45 port 59212 [preauth]
Nov 25 11:17:52 compute-0 nova_compute[189381]: 2025-11-25 11:17:52.004 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:52 compute-0 nova_compute[189381]: 2025-11-25 11:17:52.004 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:53 compute-0 nova_compute[189381]: 2025-11-25 11:17:53.908 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:54 compute-0 nova_compute[189381]: 2025-11-25 11:17:54.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:54 compute-0 nova_compute[189381]: 2025-11-25 11:17:54.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:17:54 compute-0 nova_compute[189381]: 2025-11-25 11:17:54.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:17:54 compute-0 nova_compute[189381]: 2025-11-25 11:17:54.935 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:54 compute-0 podman[260861]: 2025-11-25 11:17:54.942900799 +0000 UTC m=+0.059652492 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:17:58 compute-0 nova_compute[189381]: 2025-11-25 11:17:58.910 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:17:59 compute-0 podman[203557]: time="2025-11-25T11:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:17:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:17:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
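The two GET requests above are podman_exporter scraping the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data). A minimal Python sketch of the same query, assuming that socket path; the UnixHTTPConnection helper is illustrative, not part of the stdlib:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client works fine over AF_UNIX once we hand it a
        # connected socket instead of letting it dial TCP.
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the logged request above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    print(len(json.loads(conn.getresponse().read())), "containers")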
Nov 25 11:17:59 compute-0 nova_compute[189381]: 2025-11-25 11:17:59.937 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:01 compute-0 nova_compute[189381]: 2025-11-25 11:18:01.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:01 compute-0 openstack_network_exporter[205722]: ERROR   11:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:18:01 compute-0 openstack_network_exporter[205722]: ERROR   11:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:18:01 compute-0 openstack_network_exporter[205722]: ERROR   11:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:18:01 compute-0 openstack_network_exporter[205722]: ERROR   11:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:18:01 compute-0 openstack_network_exporter[205722]: ERROR   11:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:18:03 compute-0 nova_compute[189381]: 2025-11-25 11:18:03.911 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:04 compute-0 nova_compute[189381]: 2025-11-25 11:18:04.940 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:04 compute-0 podman[260885]: 2025-11-25 11:18:04.961443661 +0000 UTC m=+0.070626917 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:18:04 compute-0 podman[260884]: 2025-11-25 11:18:04.997677671 +0000 UTC m=+0.109377709 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 25 11:18:07 compute-0 podman[260920]: 2025-11-25 11:18:07.956203698 +0000 UTC m=+0.067981431 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 25 11:18:08 compute-0 nova_compute[189381]: 2025-11-25 11:18:08.914 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:09 compute-0 nova_compute[189381]: 2025-11-25 11:18:09.942 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:13 compute-0 nova_compute[189381]: 2025-11-25 11:18:13.916 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:14 compute-0 podman[260940]: 2025-11-25 11:18:14.767097309 +0000 UTC m=+0.092985358 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 25 11:18:14 compute-0 nova_compute[189381]: 2025-11-25 11:18:14.944 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:17 compute-0 podman[260958]: 2025-11-25 11:18:17.96918325 +0000 UTC m=+0.083451215 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 25 11:18:17 compute-0 podman[260959]: 2025-11-25 11:18:17.993807507 +0000 UTC m=+0.101758100 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
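The unit-include argument in that node_exporter command line restricts the systemd collector to the EDPM-relevant services. A quick way to check what the pattern admits (node_exporter anchors its include patterns, so fullmatch approximates the Go matcher; the sample unit names below are illustrative only):

    import re

    # Pattern from --collector.systemd.unit-include above, with the
    # shell-level double backslash reduced to a single one.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("ovs-vswitchd.service", "rsyslog.service", "sshd.service"):
        print(unit, bool(unit_include.fullmatch(unit)))  # True, True, False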
Nov 25 11:18:18 compute-0 nova_compute[189381]: 2025-11-25 11:18:18.918 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.732 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.732 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.733 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.733 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.734 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
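The Acquiring/acquired/released triples above are oslo.concurrency's standard lock trace: do_terminate_instance holds a lock named by the instance UUID, and clear_events_for_instance briefly takes a separate "<uuid>-events" lock so the pending-event list can be cleared independently. A minimal sketch of that pattern (lock names taken from the log; the body is a placeholder):

    from oslo_concurrency import lockutils

    uuid = "18a30ced-09e6-4c6a-9ea3-4c59f437a71a"

    with lockutils.lock(uuid):                   # held by do_terminate_instance
        with lockutils.lock(uuid + "-events"):   # _clear_events, held ~0.000s
            pass  # drop queued external events for this instance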
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.735 189385 INFO nova.compute.manager [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Terminating instance
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.736 189385 DEBUG nova.compute.manager [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:18:19 compute-0 kernel: tap6ed45132-26 (unregistering): left promiscuous mode
Nov 25 11:18:19 compute-0 NetworkManager[56317]: <info>  [1764069499.8843] device (tap6ed45132-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:18:19 compute-0 ovn_controller[97779]: 2025-11-25T11:18:19Z|00201|binding|INFO|Releasing lport 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 from this chassis (sb_readonly=0)
Nov 25 11:18:19 compute-0 ovn_controller[97779]: 2025-11-25T11:18:19Z|00202|binding|INFO|Setting lport 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 down in Southbound
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.905 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:19 compute-0 ovn_controller[97779]: 2025-11-25T11:18:19Z|00203|binding|INFO|Removing iface tap6ed45132-26 ovn-installed in OVS
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.907 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.918 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:19 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 25 11:18:19 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 6min 56.073s CPU time.
Nov 25 11:18:19 compute-0 systemd-machined[155706]: Machine qemu-11-instance-0000000a terminated.
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.946 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.965 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:19 compute-0 nova_compute[189381]: 2025-11-25 11:18:19.971 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.008 189385 INFO nova.virt.libvirt.driver [-] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Instance destroyed successfully.
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.009 189385 DEBUG nova.objects.instance [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lazy-loading 'resources' on Instance uuid 18a30ced-09e6-4c6a-9ea3-4c59f437a71a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.020 189385 DEBUG nova.virt.libvirt.vif [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:04:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0798672-asg-2iigtlngwuwp-527gobor6svh-sdnl3i3yrpw4',id=10,image_ref='62ab6b08-ec10-4838-aa81-24150af36537',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:04:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f33016ec-000f-44cf-b7cc-2122723ba143'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d057fe4d034a4f13b6e08dc8083cad5b',ramdisk_id='',reservation_id='r-3jhjkex5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='62ab6b08-ec10-4838-aa81-24150af36537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1327093183',owner_user_name='tempest-PrometheusGabbiTest-1327093183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:04:56Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='95acdf386c1e42c8a6da1f7b9603054f',uuid=18a30ced-09e6-4c6a-9ea3-4c59f437a71a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.021 189385 DEBUG nova.network.os_vif_util [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converting VIF {"id": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "address": "fa:16:3e:fd:bc:05", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ed45132-26", "ovs_interfaceid": "6ed45132-26d0-4000-b0b9-bb7c45ac85f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.021 189385 DEBUG nova.network.os_vif_util [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:bc:05,bridge_name='br-int',has_traffic_filtering=True,id=6ed45132-26d0-4000-b0b9-bb7c45ac85f7,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed45132-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.022 189385 DEBUG os_vif [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:bc:05,bridge_name='br-int',has_traffic_filtering=True,id=6ed45132-26d0-4000-b0b9-bb7c45ac85f7,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed45132-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.023 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.023 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ed45132-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.025 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.027 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.027 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.030 189385 INFO os_vif [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:bc:05,bridge_name='br-int',has_traffic_filtering=True,id=6ed45132-26d0-4000-b0b9-bb7c45ac85f7,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ed45132-26')
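os-vif's OVS plugin performs the unplug through ovsdbapp, which is what emits the "Running txn n=1 command(idx=0): DelPortCommand(...)" line above. A sketch of issuing the equivalent single-command transaction directly, assuming the default local ovsdb socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Mirrors DelPortCommand(port=tap6ed45132-26, bridge=br-int, if_exists=True).
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap6ed45132-26", bridge="br-int", if_exists=True))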
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.031 189385 INFO nova.virt.libvirt.driver [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Deleting instance files /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a_del
Nov 25 11:18:20 compute-0 nova_compute[189381]: 2025-11-25 11:18:20.031 189385 INFO nova.virt.libvirt.driver [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Deletion of /var/lib/nova/instances/18a30ced-09e6-4c6a-9ea3-4c59f437a71a_del complete
Nov 25 11:18:20 compute-0 podman[261029]: 2025-11-25 11:18:20.132880245 +0000 UTC m=+0.105424265 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:18:20 compute-0 podman[261031]: 2025-11-25 11:18:20.137251381 +0000 UTC m=+0.109799631 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.876 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:bc:05 10.100.2.10'], port_security=['fa:16:3e:fd:bc:05 10.100.2.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.10/16', 'neutron:device_id': '18a30ced-09e6-4c6a-9ea3-4c59f437a71a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dd922d1-432e-41c0-9438-975e4d0bc760', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da371dea-a01c-4170-8065-7d1b11a4ac95, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=6ed45132-26d0-4000-b0b9-bb7c45ac85f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.878 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 6ed45132-26d0-4000-b0b9-bb7c45ac85f7 in datapath a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 unbound from our chassis
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.879 106634 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.895 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[28f42039-6452-44b0-961c-f0a4bdb456db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.928 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[aae3dbd9-92eb-459d-9a53-f34a4aca7b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.932 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[a70a1f48-9cbd-4109-b515-8b56792461c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.963 239638 DEBUG oslo.privsep.daemon [-] privsep: reply[7be3a222-d183-41db-a0ac-109ee08a6f07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.981 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[913c1950-3bcf-438b-bde4-aa28068f1ff4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa82a38fb-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:c9:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 559003, 'reachable_time': 28354, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261079, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.997 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[8f478d6c-2922-4e42-b901-75bff2ae89cb]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapa82a38fb-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559013, 'tstamp': 559013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261080, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapa82a38fb-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 559017, 'tstamp': 559017}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261080, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:20 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:20.999 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa82a38fb-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:18:21 compute-0 nova_compute[189381]: 2025-11-25 11:18:21.001 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:21.002 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa82a38fb-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:18:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:21.003 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:18:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:21.003 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa82a38fb-80, col_values=(('external_ids', {'iface-id': '915e80eb-5def-4cf6-b65e-79eab93b7232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:18:21 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:21.004 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 25 11:18:21 compute-0 nova_compute[189381]: 2025-11-25 11:18:21.818 189385 INFO nova.compute.manager [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Took 2.08 seconds to destroy the instance on the hypervisor.
Nov 25 11:18:21 compute-0 nova_compute[189381]: 2025-11-25 11:18:21.819 189385 DEBUG oslo.service.loopingcall [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:18:21 compute-0 nova_compute[189381]: 2025-11-25 11:18:21.819 189385 DEBUG nova.compute.manager [-] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:18:21 compute-0 nova_compute[189381]: 2025-11-25 11:18:21.820 189385 DEBUG nova.network.neutron [-] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 25 11:18:23 compute-0 nova_compute[189381]: 2025-11-25 11:18:23.921 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:24 compute-0 nova_compute[189381]: 2025-11-25 11:18:24.524 189385 DEBUG nova.compute.manager [req-30879207-e112-4317-9510-ce161af23608 req-70ac811e-8dfb-4955-bd40-efdf63386f65 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received event network-vif-unplugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:18:24 compute-0 nova_compute[189381]: 2025-11-25 11:18:24.525 189385 DEBUG oslo_concurrency.lockutils [req-30879207-e112-4317-9510-ce161af23608 req-70ac811e-8dfb-4955-bd40-efdf63386f65 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:24 compute-0 nova_compute[189381]: 2025-11-25 11:18:24.526 189385 DEBUG oslo_concurrency.lockutils [req-30879207-e112-4317-9510-ce161af23608 req-70ac811e-8dfb-4955-bd40-efdf63386f65 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:24 compute-0 nova_compute[189381]: 2025-11-25 11:18:24.527 189385 DEBUG oslo_concurrency.lockutils [req-30879207-e112-4317-9510-ce161af23608 req-70ac811e-8dfb-4955-bd40-efdf63386f65 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:24 compute-0 nova_compute[189381]: 2025-11-25 11:18:24.527 189385 DEBUG nova.compute.manager [req-30879207-e112-4317-9510-ce161af23608 req-70ac811e-8dfb-4955-bd40-efdf63386f65 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] No waiting events found dispatching network-vif-unplugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:18:24 compute-0 nova_compute[189381]: 2025-11-25 11:18:24.528 189385 DEBUG nova.compute.manager [req-30879207-e112-4317-9510-ce161af23608 req-70ac811e-8dfb-4955-bd40-efdf63386f65 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received event network-vif-unplugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.027 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.143 189385 DEBUG nova.network.neutron [-] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.378 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:25 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:25.378 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fe:9c:2b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '7a:4f:a0:37:9e:7b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:18:25 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:25.379 106634 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.450 189385 INFO nova.compute.manager [-] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Took 3.63 seconds to deallocate network for instance.
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.640 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.641 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.722 189385 DEBUG nova.compute.provider_tree [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.735 189385 DEBUG nova.scheduler.client.report [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
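Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, so the unchanged figures above work out to:

    # Effective capacity for provider a660730c-fa97-4a71-acf8-b1f3eef924ba
    vcpu = (8    - 0)   * 4.0   # 32.0 schedulable vCPUs
    ram  = (7679 - 512) * 1.0   # 7167.0 MB
    disk = (79   - 1)   * 0.9   # 70.2 GB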
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.860 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:25 compute-0 nova_compute[189381]: 2025-11-25 11:18:25.950 189385 INFO nova.scheduler.client.report [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Deleted allocations for instance 18a30ced-09e6-4c6a-9ea3-4c59f437a71a
Nov 25 11:18:25 compute-0 podman[261081]: 2025-11-25 11:18:25.972932388 +0000 UTC m=+0.080460448 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 25 11:18:26 compute-0 nova_compute[189381]: 2025-11-25 11:18:26.266 189385 DEBUG oslo_concurrency.lockutils [None req-5f95ca11-d271-4949-a144-68bea8833edf 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:26 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:26.381 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3fcb3423-a4d5-4f72-950c-307893e4a985, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:18:26 compute-0 nova_compute[189381]: 2025-11-25 11:18:26.653 189385 DEBUG nova.compute.manager [req-d46b648a-ab5d-4744-8cef-4624503d13c1 req-f6b12112-ffe8-4f37-a759-615c2172d092 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received event network-vif-deleted-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:18:26 compute-0 nova_compute[189381]: 2025-11-25 11:18:26.654 189385 DEBUG nova.compute.manager [req-d46b648a-ab5d-4744-8cef-4624503d13c1 req-f6b12112-ffe8-4f37-a759-615c2172d092 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received event network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:18:26 compute-0 nova_compute[189381]: 2025-11-25 11:18:26.654 189385 DEBUG oslo_concurrency.lockutils [req-d46b648a-ab5d-4744-8cef-4624503d13c1 req-f6b12112-ffe8-4f37-a759-615c2172d092 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:26 compute-0 nova_compute[189381]: 2025-11-25 11:18:26.654 189385 DEBUG oslo_concurrency.lockutils [req-d46b648a-ab5d-4744-8cef-4624503d13c1 req-f6b12112-ffe8-4f37-a759-615c2172d092 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:26 compute-0 nova_compute[189381]: 2025-11-25 11:18:26.655 189385 DEBUG oslo_concurrency.lockutils [req-d46b648a-ab5d-4744-8cef-4624503d13c1 req-f6b12112-ffe8-4f37-a759-615c2172d092 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "18a30ced-09e6-4c6a-9ea3-4c59f437a71a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:26 compute-0 nova_compute[189381]: 2025-11-25 11:18:26.655 189385 DEBUG nova.compute.manager [req-d46b648a-ab5d-4744-8cef-4624503d13c1 req-f6b12112-ffe8-4f37-a759-615c2172d092 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] No waiting events found dispatching network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:18:26 compute-0 nova_compute[189381]: 2025-11-25 11:18:26.655 189385 WARNING nova.compute.manager [req-d46b648a-ab5d-4744-8cef-4624503d13c1 req-f6b12112-ffe8-4f37-a759-615c2172d092 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Received unexpected event network-vif-plugged-6ed45132-26d0-4000-b0b9-bb7c45ac85f7 for instance with vm_state deleted and task_state None.
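The WARNING fires because the network-vif-plugged event arrived after the instance had already reached vm_state deleted, so no thread was left waiting on it. A toy model of the pop-or-warn pattern behind "No waiting events found" (purely illustrative, not nova's InstanceEvents implementation):

    import threading

    # (instance_uuid, event_name) -> threading.Event a waiter blocks on
    waiters = {}

    def pop_instance_event(uuid, name):
        return waiters.pop((uuid, name), None)

    evt = pop_instance_event('18a30ced-09e6-4c6a-9ea3-4c59f437a71a',
                             'network-vif-plugged')
    if evt is None:
        print('No waiting events found; dropping unexpected event')
    else:
        evt.set()  # wakes the thread blocked in evt.wait()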
Nov 25 11:18:28 compute-0 nova_compute[189381]: 2025-11-25 11:18:28.923 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:29 compute-0 podman[203557]: time="2025-11-25T11:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:18:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Nov 25 11:18:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
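The two GETs above are the prometheus-podman-exporter polling libpod's REST API over the unix socket mounted into it (CONTAINER_HOST=unix:///run/podman/podman.sock in the config logged at 11:18:25). A stdlib-only sketch issuing the same request:

    # Same libpod query as the access-log line above, over the podman
    # socket; the path comes from the exporter's CONTAINER_HOST setting.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), 'bytes')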
Nov 25 11:18:30 compute-0 nova_compute[189381]: 2025-11-25 11:18:30.030 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:31 compute-0 openstack_network_exporter[205722]: ERROR   11:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:18:31 compute-0 openstack_network_exporter[205722]: ERROR   11:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:18:31 compute-0 openstack_network_exporter[205722]: ERROR   11:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:18:31 compute-0 openstack_network_exporter[205722]: ERROR   11:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:18:31 compute-0 openstack_network_exporter[205722]: ERROR   11:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
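These exporter errors mean it found no <daemon>.<pid>.ctl control sockets to issue appctl-style calls against, which is expected on a compute node where ovn-northd does not run. A sketch of the same discovery step; the run directories are common defaults and can differ per deployment:

    # Look for the <daemon>.<pid>.ctl sockets the exporter could not
    # find; rundir paths are the usual defaults, not guaranteed.
    import glob
    import os

    for rundir in ('/var/run/openvswitch', '/var/run/ovn'):
        ctls = glob.glob(os.path.join(rundir, '*.ctl'))
        print(rundir, '->', [os.path.basename(c) for c in ctls] or 'none')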
Nov 25 11:18:33 compute-0 nova_compute[189381]: 2025-11-25 11:18:33.932 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:35 compute-0 nova_compute[189381]: 2025-11-25 11:18:35.005 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764069500.003394, 18a30ced-09e6-4c6a-9ea3-4c59f437a71a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:18:35 compute-0 nova_compute[189381]: 2025-11-25 11:18:35.005 189385 INFO nova.compute.manager [-] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] VM Stopped (Lifecycle Event)
Nov 25 11:18:35 compute-0 nova_compute[189381]: 2025-11-25 11:18:35.027 189385 DEBUG nova.compute.manager [None req-a3e4408c-fc47-4e62-b63b-80c5e4f69bab - - - - - -] [instance: 18a30ced-09e6-4c6a-9ea3-4c59f437a71a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:18:35 compute-0 nova_compute[189381]: 2025-11-25 11:18:35.032 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:35 compute-0 podman[261106]: 2025-11-25 11:18:35.965024636 +0000 UTC m=+0.071805401 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 11:18:35 compute-0 podman[261107]: 2025-11-25 11:18:35.964861892 +0000 UTC m=+0.071149562 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 25 11:18:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:36.085 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:36.086 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:36.087 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:38 compute-0 nova_compute[189381]: 2025-11-25 11:18:38.934 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:38 compute-0 podman[261143]: 2025-11-25 11:18:38.978497307 +0000 UTC m=+0.092302439 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9, config_id=edpm, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.286 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.287 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.287 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.289 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.289 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.291 189385 INFO nova.compute.manager [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Terminating instance
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.292 189385 DEBUG nova.compute.manager [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 25 11:18:39 compute-0 kernel: tap00b30981-59 (unregistering): left promiscuous mode
Nov 25 11:18:39 compute-0 NetworkManager[56317]: <info>  [1764069519.3271] device (tap00b30981-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.337 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00204|binding|INFO|Releasing lport 00b30981-5989-421b-9886-4a0d1020874c from this chassis (sb_readonly=0)
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00205|binding|INFO|Setting lport 00b30981-5989-421b-9886-4a0d1020874c down in Southbound
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00206|binding|INFO|Removing iface tap00b30981-59 ovn-installed in OVS
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.357 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:39 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 25 11:18:39 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 32.310s CPU time.
Nov 25 11:18:39 compute-0 systemd-machined[155706]: Machine qemu-16-instance-0000000f terminated.
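The scope names above are systemd-escaped unit names, in which a literal "-" inside the machine name is written \x2d. `systemd-escape --unescape` reverses this; the equivalent for this simple case:

    # Undo systemd's \xNN escaping in the scope name logged above.
    import re

    name = r'machine-qemu\x2d16\x2dinstance\x2d0000000f.scope'
    decoded = re.sub(r'\\x([0-9a-fA-F]{2})',
                     lambda m: chr(int(m.group(1), 16)), name)
    print(decoded)  # machine-qemu-16-instance-0000000f.scope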
Nov 25 11:18:39 compute-0 kernel: tap00b30981-59: entered promiscuous mode
Nov 25 11:18:39 compute-0 kernel: tap00b30981-59 (unregistering): left promiscuous mode
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.531 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00207|if_status|INFO|Dropped 3 log messages in last 691 seconds (most recently, 691 seconds ago) due to excessive rate
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00208|if_status|INFO|Not updating pb chassis for 00b30981-5989-421b-9886-4a0d1020874c now as sb is readonly
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00209|binding|INFO|Claiming lport 00b30981-5989-421b-9886-4a0d1020874c for this chassis.
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00210|binding|INFO|00b30981-5989-421b-9886-4a0d1020874c: Claiming fa:16:3e:93:2c:2e 10.100.0.181
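The release/claim churn above is ovn-controller reacting to the tap device disappearing and briefly reappearing while southbound writes were read-only. To inspect the Port_Binding row involved, something like the following works from a node that can reach the southbound DB (assumes ovn-sbctl is installed and configured):

    # Dump the Port_Binding row being claimed/released above.
    # Assumes ovn-sbctl can reach the OVN southbound DB.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis,up', 'find',
         'Port_Binding',
         'logical_port=00b30981-5989-421b-9886-4a0d1020874c'],
        capture_output=True, text=True, check=True)
    print(out.stdout)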
Nov 25 11:18:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:39.542 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:2c:2e 10.100.0.181'], port_security=['fa:16:3e:93:2c:2e 10.100.0.181'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.181/16', 'neutron:device_id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dd922d1-432e-41c0-9438-975e4d0bc760', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da371dea-a01c-4170-8065-7d1b11a4ac95, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=00b30981-5989-421b-9886-4a0d1020874c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:18:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:39.543 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 00b30981-5989-421b-9886-4a0d1020874c in datapath a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 unbound from our chassis
Nov 25 11:18:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:39.544 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:18:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:39.546 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[220d46f9-0f79-453f-b9c3-1f5db296e231]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:39.546 106634 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 namespace which is not needed anymore
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00211|binding|INFO|Setting lport 00b30981-5989-421b-9886-4a0d1020874c ovn-installed in OVS
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00212|if_status|INFO|Dropped 3 log messages in last 691 seconds (most recently, 691 seconds ago) due to excessive rate
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00213|if_status|INFO|Not setting lport 00b30981-5989-421b-9886-4a0d1020874c down as sb is readonly
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.557 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:39 compute-0 ovn_controller[97779]: 2025-11-25T11:18:39Z|00214|binding|INFO|Releasing lport 00b30981-5989-421b-9886-4a0d1020874c from this chassis (sb_readonly=0)
Nov 25 11:18:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:39.565 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:2c:2e 10.100.0.181'], port_security=['fa:16:3e:93:2c:2e 10.100.0.181'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.181/16', 'neutron:device_id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dd922d1-432e-41c0-9438-975e4d0bc760', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da371dea-a01c-4170-8065-7d1b11a4ac95, chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=00b30981-5989-421b-9886-4a0d1020874c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.570 189385 INFO nova.virt.libvirt.driver [-] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Instance destroyed successfully.
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.571 189385 DEBUG nova.objects.instance [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lazy-loading 'resources' on Instance uuid dba9274f-6164-41cc-8f4b-870c1cb3f67c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.575 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.585 189385 DEBUG nova.virt.libvirt.vif [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-25T11:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-0798672-asg-2iigtlngwuwp-6sxipnwxppgu-5vntbjofj5kx',id=15,image_ref='62ab6b08-ec10-4838-aa81-24150af36537',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-25T11:08:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='f33016ec-000f-44cf-b7cc-2122723ba143'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d057fe4d034a4f13b6e08dc8083cad5b',ramdisk_id='',reservation_id='r-fc0lq6tm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='62ab6b08-ec10-4838-aa81-24150af36537',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1327093183',owner_user_name='tempest-PrometheusGabbiTest-1327093183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-25T11:08:36Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='95acdf386c1e42c8a6da1f7b9603054f',uuid=dba9274f-6164-41cc-8f4b-870c1cb3f67c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
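The Instance dump above embeds its user_data as base64; decoding the string shows the short CPU-load script the tempest guest runs at boot:

    # Decode the user_data field from the Instance dump above.
    import base64

    user_data = ('IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYv'
                 'dXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==')
    print(base64.b64decode(user_data).decode())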
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.585 189385 DEBUG nova.network.os_vif_util [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converting VIF {"id": "00b30981-5989-421b-9886-4a0d1020874c", "address": "fa:16:3e:93:2c:2e", "network": {"id": "a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d057fe4d034a4f13b6e08dc8083cad5b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00b30981-59", "ovs_interfaceid": "00b30981-5989-421b-9886-4a0d1020874c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.586 189385 DEBUG nova.network.os_vif_util [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:93:2c:2e,bridge_name='br-int',has_traffic_filtering=True,id=00b30981-5989-421b-9886-4a0d1020874c,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b30981-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.586 189385 DEBUG os_vif [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:2c:2e,bridge_name='br-int',has_traffic_filtering=True,id=00b30981-5989-421b-9886-4a0d1020874c,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b30981-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.588 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.588 189385 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00b30981-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.589 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.591 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.593 189385 INFO os_vif [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:93:2c:2e,bridge_name='br-int',has_traffic_filtering=True,id=00b30981-5989-421b-9886-4a0d1020874c,network=Network(a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00b30981-59')
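The DelPortCommand transaction at 11:18:39.588 is os-vif driving ovsdbapp's Open_vSwitch schema API to pull the tap out of br-int. A connection-plus-command sketch of the same call; the DB socket path is the usual default and may differ per host:

    # Sketch of the DelPortCommand logged above via ovsdbapp's
    # Open_vSwitch API; the socket path is a common default.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port('tap00b30981-59', bridge='br-int',
                 if_exists=True).execute(check_error=True)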
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.594 189385 INFO nova.virt.libvirt.driver [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Deleting instance files /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c_del
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.594 189385 INFO nova.virt.libvirt.driver [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Deletion of /var/lib/nova/instances/dba9274f-6164-41cc-8f4b-870c1cb3f67c_del complete
Nov 25 11:18:39 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:39.603 106634 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:93:2c:2e 10.100.0.181'], port_security=['fa:16:3e:93:2c:2e 10.100.0.181'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.181/16', 'neutron:device_id': 'dba9274f-6164-41cc-8f4b-870c1cb3f67c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd057fe4d034a4f13b6e08dc8083cad5b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6dd922d1-432e-41c0-9438-975e4d0bc760', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da371dea-a01c-4170-8065-7d1b11a4ac95, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7efe86320760>], logical_port=00b30981-5989-421b-9886-4a0d1020874c) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7efe86320760>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 25 11:18:39 compute-0 neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5[254944]: [NOTICE]   (254961) : haproxy version is 2.8.14-c23fe91
Nov 25 11:18:39 compute-0 neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5[254944]: [NOTICE]   (254961) : path to executable is /usr/sbin/haproxy
Nov 25 11:18:39 compute-0 neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5[254944]: [WARNING]  (254961) : Exiting Master process...
Nov 25 11:18:39 compute-0 neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5[254944]: [ALERT]    (254961) : Current worker (254967) exited with code 143 (Terminated)
Nov 25 11:18:39 compute-0 neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5[254944]: [WARNING]  (254961) : All workers exited. Exiting... (0)
Nov 25 11:18:39 compute-0 systemd[1]: libpod-97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066.scope: Deactivated successfully.
Nov 25 11:18:39 compute-0 podman[261204]: 2025-11-25 11:18:39.775612772 +0000 UTC m=+0.123788192 container died 97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.777 189385 INFO nova.compute.manager [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Took 0.48 seconds to destroy the instance on the hypervisor.
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.778 189385 DEBUG oslo.service.loopingcall [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.779 189385 DEBUG nova.compute.manager [-] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 25 11:18:39 compute-0 nova_compute[189381]: 2025-11-25 11:18:39.779 189385 DEBUG nova.network.neutron [-] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
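The oslo.service line above is the looping-call machinery nova uses to retry network deallocation until it succeeds. A minimal FixedIntervalLoopingCall example showing the primitive (nova's exact retry wrapper may differ):

    # The "Waiting for function ... to return" message above comes from
    # oslo.service's looping calls; minimal example of the primitive.
    from oslo_service import loopingcall

    attempts = []

    def _deallocate_with_retries():
        attempts.append(1)
        if len(attempts) < 3:
            return  # not done yet; run again on the next interval
        raise loopingcall.LoopingCallDone(retvalue='deallocated')

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    print(timer.start(interval=0.1).wait())  # -> 'deallocated'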
Nov 25 11:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066-userdata-shm.mount: Deactivated successfully.
Nov 25 11:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-bab000c5d34714a0115083fdf3255d3c45c583d1ff6fc38609ff6d7643317f69-merged.mount: Deactivated successfully.
Nov 25 11:18:40 compute-0 podman[261204]: 2025-11-25 11:18:40.15238944 +0000 UTC m=+0.500564860 container cleanup 97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 25 11:18:40 compute-0 systemd[1]: libpod-conmon-97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066.scope: Deactivated successfully.
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.567 189385 DEBUG nova.compute.manager [req-7e80b65c-3d78-4de6-8975-2b7394f55fc0 req-f3ccdd13-129e-4d80-8bb8-7c09f17b3f8a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received event network-vif-unplugged-00b30981-5989-421b-9886-4a0d1020874c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.567 189385 DEBUG oslo_concurrency.lockutils [req-7e80b65c-3d78-4de6-8975-2b7394f55fc0 req-f3ccdd13-129e-4d80-8bb8-7c09f17b3f8a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.568 189385 DEBUG oslo_concurrency.lockutils [req-7e80b65c-3d78-4de6-8975-2b7394f55fc0 req-f3ccdd13-129e-4d80-8bb8-7c09f17b3f8a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.568 189385 DEBUG oslo_concurrency.lockutils [req-7e80b65c-3d78-4de6-8975-2b7394f55fc0 req-f3ccdd13-129e-4d80-8bb8-7c09f17b3f8a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.568 189385 DEBUG nova.compute.manager [req-7e80b65c-3d78-4de6-8975-2b7394f55fc0 req-f3ccdd13-129e-4d80-8bb8-7c09f17b3f8a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] No waiting events found dispatching network-vif-unplugged-00b30981-5989-421b-9886-4a0d1020874c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.568 189385 DEBUG nova.compute.manager [req-7e80b65c-3d78-4de6-8975-2b7394f55fc0 req-f3ccdd13-129e-4d80-8bb8-7c09f17b3f8a d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received event network-vif-unplugged-00b30981-5989-421b-9886-4a0d1020874c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 25 11:18:40 compute-0 podman[261231]: 2025-11-25 11:18:40.679294022 +0000 UTC m=+0.498913720 container remove 97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.688 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[e1413950-9090-4557-b9bf-c2c76c67f2d0]: (4, ('Tue Nov 25 11:18:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 (97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066)\n97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066\nTue Nov 25 11:18:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 (97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066)\n97e88b9a5e6ab39e4e57da6278b8a8e63595cef0bba4b52f693cb94680f40066\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.690 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[d725bc6e-5bd7-4ac3-9a9e-10d7a45f030f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.691 106634 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa82a38fb-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.694 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:40 compute-0 kernel: tapa82a38fb-80: left promiscuous mode
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.706 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:40 compute-0 nova_compute[189381]: 2025-11-25 11:18:40.707 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.710 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[a55a39d8-9816-4330-8306-2ad5b9b8e189]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.732 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd0457b-7a75-462d-86a0-d507909e12f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.734 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[2173866f-72d6-4590-822e-5827efe2ce02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.749 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[518ce002-0e00-42d6-987a-41595ae391a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 558996, 'reachable_time': 20745, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261245, 'error': None, 'target': 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.752 106746 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.752 106746 DEBUG oslo.privsep.daemon [-] privsep: reply[a12f4768-aa86-4b12-9b05-b743bc1e2fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:40 compute-0 systemd[1]: run-netns-ovnmeta\x2da82a38fb\x2d8be2\x2d4a9c\x2d9a85\x2dff991bc0b1e5.mount: Deactivated successfully.
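The namespace teardown above goes through neutron's privileged ip_lib, which wraps pyroute2. An equivalent removal of the ovnmeta namespace (requires root):

    # Equivalent of the remove_netns call logged above; pyroute2 is
    # what neutron's privileged ip_lib uses underneath. Needs root.
    from pyroute2 import netns

    ns = 'ovnmeta-a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5'
    if ns in netns.listnetns():
        netns.remove(ns)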
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.754 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 00b30981-5989-421b-9886-4a0d1020874c in datapath a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 unbound from our chassis
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.755 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.757 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[150070aa-fac1-4bb0-92be-0564b88cc979]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.757 106634 INFO neutron.agent.ovn.metadata.agent [-] Port 00b30981-5989-421b-9886-4a0d1020874c in datapath a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5 unbound from our chassis
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.758 106634 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a82a38fb-8be2-4a9c-9a85-ff991bc0b1e5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 25 11:18:40 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:18:40.759 239582 DEBUG oslo.privsep.daemon [-] privsep: reply[b021da0a-1c29-4617-b3c2-b04468ef63b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 25 11:18:41 compute-0 nova_compute[189381]: 2025-11-25 11:18:41.211 189385 DEBUG nova.network.neutron [-] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 25 11:18:41 compute-0 nova_compute[189381]: 2025-11-25 11:18:41.247 189385 INFO nova.compute.manager [-] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Took 1.47 seconds to deallocate network for instance.
Nov 25 11:18:41 compute-0 nova_compute[189381]: 2025-11-25 11:18:41.349 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:41 compute-0 nova_compute[189381]: 2025-11-25 11:18:41.350 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:41 compute-0 nova_compute[189381]: 2025-11-25 11:18:41.414 189385 DEBUG nova.compute.provider_tree [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:18:41 compute-0 nova_compute[189381]: 2025-11-25 11:18:41.429 189385 DEBUG nova.scheduler.client.report [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:18:41 compute-0 nova_compute[189381]: 2025-11-25 11:18:41.512 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
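The Acquiring / acquired / "released" triple above, with its waited/held timings, is the standard oslo.concurrency pattern; the `inner` in each line is the decorator's wrapper function emitting those DEBUG messages. A minimal sketch of the pattern, assuming only a lock name (the real ResourceTracker code is far more involved):

```python
# Sketch of the oslo.concurrency locking pattern behind the
# "Acquiring lock ... / acquired ... waited / released ... held" lines.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def update_usage():
    # runs with the named semaphore held; the wrapper logs the
    # waited/held durations seen in the journal
    pass

# the same named lock can also be taken as a context manager:
with lockutils.lock('compute_resources'):
    pass
```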
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.063 189385 INFO nova.scheduler.client.report [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Deleted allocations for instance dba9274f-6164-41cc-8f4b-870c1cb3f67c
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.129 189385 DEBUG oslo_concurrency.lockutils [None req-7f651315-f8c4-49ca-8eab-6490d9234382 95acdf386c1e42c8a6da1f7b9603054f d057fe4d034a4f13b6e08dc8083cad5b - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.880 189385 DEBUG nova.compute.manager [req-e009cbe4-db2f-49f9-b0bd-21bc7b4dd0ca req-8b06a0fa-6b8e-4640-875e-acf62711c088 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received event network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.880 189385 DEBUG oslo_concurrency.lockutils [req-e009cbe4-db2f-49f9-b0bd-21bc7b4dd0ca req-8b06a0fa-6b8e-4640-875e-acf62711c088 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Acquiring lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.880 189385 DEBUG oslo_concurrency.lockutils [req-e009cbe4-db2f-49f9-b0bd-21bc7b4dd0ca req-8b06a0fa-6b8e-4640-875e-acf62711c088 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.880 189385 DEBUG oslo_concurrency.lockutils [req-e009cbe4-db2f-49f9-b0bd-21bc7b4dd0ca req-8b06a0fa-6b8e-4640-875e-acf62711c088 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] Lock "dba9274f-6164-41cc-8f4b-870c1cb3f67c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.881 189385 DEBUG nova.compute.manager [req-e009cbe4-db2f-49f9-b0bd-21bc7b4dd0ca req-8b06a0fa-6b8e-4640-875e-acf62711c088 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] No waiting events found dispatching network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.881 189385 WARNING nova.compute.manager [req-e009cbe4-db2f-49f9-b0bd-21bc7b4dd0ca req-8b06a0fa-6b8e-4640-875e-acf62711c088 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received unexpected event network-vif-plugged-00b30981-5989-421b-9886-4a0d1020874c for instance with vm_state deleted and task_state None.
Nov 25 11:18:42 compute-0 nova_compute[189381]: 2025-11-25 11:18:42.881 189385 DEBUG nova.compute.manager [req-e009cbe4-db2f-49f9-b0bd-21bc7b4dd0ca req-8b06a0fa-6b8e-4640-875e-acf62711c088 d54b4b5c98d6430d9595a319959e6780 2667e38ebece4675a23dca6c850c9f1e - - default default] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Received event network-vif-deleted-00b30981-5989-421b-9886-4a0d1020874c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 25 11:18:43 compute-0 nova_compute[189381]: 2025-11-25 11:18:43.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:43 compute-0 nova_compute[189381]: 2025-11-25 11:18:43.936 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:44 compute-0 nova_compute[189381]: 2025-11-25 11:18:44.591 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:44 compute-0 podman[261246]: 2025-11-25 11:18:44.967419179 +0000 UTC m=+0.072961214 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 25 11:18:47 compute-0 nova_compute[189381]: 2025-11-25 11:18:47.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:47 compute-0 nova_compute[189381]: 2025-11-25 11:18:47.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:18:47 compute-0 nova_compute[189381]: 2025-11-25 11:18:47.048 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 11:18:47 compute-0 nova_compute[189381]: 2025-11-25 11:18:47.049 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.025 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.233 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.234 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.234 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.234 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:18:48 compute-0 podman[261267]: 2025-11-25 11:18:48.361121838 +0000 UTC m=+0.077371180 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 11:18:48 compute-0 podman[261266]: 2025-11-25 11:18:48.376148839 +0000 UTC m=+0.095999285 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git)
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.584 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.586 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5350MB free_disk=72.12825775146484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.586 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.587 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.664 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.665 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.857 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing inventories for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.875 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating ProviderTree inventory for provider a660730c-fa97-4a71-acf8-b1f3eef924ba from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.876 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Updating inventory in ProviderTree for provider a660730c-fa97-4a71-acf8-b1f3eef924ba with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.895 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing aggregate associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.918 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Refreshing trait associations for resource provider a660730c-fa97-4a71-acf8-b1f3eef924ba, traits: HW_CPU_X86_SSE,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_FMA3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AESNI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SHA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE4A,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.938 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.945 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:18:48 compute-0 nova_compute[189381]: 2025-11-25 11:18:48.969 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
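The inventory dict repeated in the entries above is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check with the exact values logged for provider a660730c-fa97-4a71-acf8-b1f3eef924ba:

```python
# Worked example with the inventory reported above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```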
Nov 25 11:18:49 compute-0 nova_compute[189381]: 2025-11-25 11:18:49.144 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:18:49 compute-0 nova_compute[189381]: 2025-11-25 11:18:49.145 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:18:49 compute-0 nova_compute[189381]: 2025-11-25 11:18:49.595 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:50 compute-0 podman[261310]: 2025-11-25 11:18:50.988174086 +0000 UTC m=+0.091561437 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 25 11:18:51 compute-0 podman[261309]: 2025-11-25 11:18:51.038715776 +0000 UTC m=+0.149528050 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller)
Nov 25 11:18:53 compute-0 nova_compute[189381]: 2025-11-25 11:18:53.145 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:53 compute-0 nova_compute[189381]: 2025-11-25 11:18:53.145 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:53 compute-0 nova_compute[189381]: 2025-11-25 11:18:53.942 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:54 compute-0 nova_compute[189381]: 2025-11-25 11:18:54.568 189385 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764069519.5676646, dba9274f-6164-41cc-8f4b-870c1cb3f67c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 25 11:18:54 compute-0 nova_compute[189381]: 2025-11-25 11:18:54.569 189385 INFO nova.compute.manager [-] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] VM Stopped (Lifecycle Event)
Nov 25 11:18:54 compute-0 nova_compute[189381]: 2025-11-25 11:18:54.598 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:54 compute-0 nova_compute[189381]: 2025-11-25 11:18:54.607 189385 DEBUG nova.compute.manager [None req-fac581ab-a033-4925-88b8-110ff3fbe7eb - - - - - -] [instance: dba9274f-6164-41cc-8f4b-870c1cb3f67c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 25 11:18:56 compute-0 nova_compute[189381]: 2025-11-25 11:18:56.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:56 compute-0 nova_compute[189381]: 2025-11-25 11:18:56.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:18:56 compute-0 nova_compute[189381]: 2025-11-25 11:18:56.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:18:56 compute-0 podman[261351]: 2025-11-25 11:18:56.991590257 +0000 UTC m=+0.098903728 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 25 11:18:58 compute-0 nova_compute[189381]: 2025-11-25 11:18:58.943 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:59 compute-0 nova_compute[189381]: 2025-11-25 11:18:59.601 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:18:59 compute-0 podman[203557]: time="2025-11-25T11:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:18:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:18:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
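The two GET requests above are the podman system service answering a Go client (the prometheus-podman-exporter) over the libpod API socket. A self-contained sketch that replays the first query from Python over the same socket path the exporter mounts (/run/podman/podman.sock, per its config earlier in the log); stdlib only, no podman bindings assumed:

```python
# Replay the libpod containers/json query over the unix socket.
import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__('localhost')   # host is ignored for unix sockets
        self._path = path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
resp = conn.getresponse()
print(resp.status, len(json.load(resp)))   # e.g. 200 and the container count
```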
Nov 25 11:19:00 compute-0 nova_compute[189381]: 2025-11-25 11:19:00.854 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:01 compute-0 openstack_network_exporter[205722]: ERROR   11:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:19:01 compute-0 openstack_network_exporter[205722]: ERROR   11:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:19:01 compute-0 openstack_network_exporter[205722]: ERROR   11:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:19:01 compute-0 openstack_network_exporter[205722]: ERROR   11:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:19:01 compute-0 openstack_network_exporter[205722]: ERROR   11:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
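The exporter errors above fall into two groups. The first three are missing control sockets: ovs-appctl-style calls need a live <daemon>.<pid>.ctl socket in the daemon's run directory, and ovn-northd runs on the controllers, not on computes, so its socket is legitimately absent here. The dpif-netdev errors are different: the call reached ovs-vswitchd, but there is no userspace (netdev) datapath to report on, which is normal for a kernel-datapath deployment. A hedged diagnostic sketch; the run-dir paths mirror the volume mounts in the exporter's config above and are assumptions about where the sockets would appear inside the container:

```python
# Look for the control sockets the appctl calls above were trying to reach.
import glob

for pattern in ('/run/openvswitch/ovsdb-server.*.ctl',   # "the ovs db server"
                '/run/openvswitch/ovs-vswitchd.*.ctl',   # dpif-netdev/* calls
                '/run/ovn/ovn-northd.*.ctl'):            # absent on computes
    print(pattern, '->', glob.glob(pattern) or 'no control socket found')
```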
Nov 25 11:19:03 compute-0 nova_compute[189381]: 2025-11-25 11:19:03.023 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.344 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.345 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f24081adf10>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:19:03.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:19:03 compute-0 nova_compute[189381]: 2025-11-25 11:19:03.944 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:04 compute-0 nova_compute[189381]: 2025-11-25 11:19:04.604 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:06 compute-0 podman[261375]: 2025-11-25 11:19:06.966383157 +0000 UTC m=+0.081012985 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute)
Nov 25 11:19:06 compute-0 podman[261376]: 2025-11-25 11:19:06.976104966 +0000 UTC m=+0.086223564 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:19:08 compute-0 nova_compute[189381]: 2025-11-25 11:19:08.946 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:09 compute-0 nova_compute[189381]: 2025-11-25 11:19:09.608 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:09 compute-0 podman[261414]: 2025-11-25 11:19:09.944467634 +0000 UTC m=+0.062647138 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, version=9.4, release=1214.1726694543, config_id=edpm, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0)
Nov 25 11:19:13 compute-0 nova_compute[189381]: 2025-11-25 11:19:13.949 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:14 compute-0 nova_compute[189381]: 2025-11-25 11:19:14.612 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:15 compute-0 podman[261434]: 2025-11-25 11:19:15.945409444 +0000 UTC m=+0.058926801 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 11:19:18 compute-0 podman[261453]: 2025-11-25 11:19:18.951670529 +0000 UTC m=+0.064230254 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:19:18 compute-0 nova_compute[189381]: 2025-11-25 11:19:18.952 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:18 compute-0 podman[261452]: 2025-11-25 11:19:18.957124575 +0000 UTC m=+0.069552876 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, vcs-type=git, name=ubi9-minimal)
Nov 25 11:19:19 compute-0 nova_compute[189381]: 2025-11-25 11:19:19.615 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:21 compute-0 podman[261495]: 2025-11-25 11:19:21.962823044 +0000 UTC m=+0.075382023 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 25 11:19:22 compute-0 podman[261494]: 2025-11-25 11:19:22.022911488 +0000 UTC m=+0.137713521 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 25 11:19:23 compute-0 nova_compute[189381]: 2025-11-25 11:19:23.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:23 compute-0 nova_compute[189381]: 2025-11-25 11:19:23.955 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:24 compute-0 nova_compute[189381]: 2025-11-25 11:19:24.617 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:27 compute-0 podman[261540]: 2025-11-25 11:19:27.98526671 +0000 UTC m=+0.096930601 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:19:28 compute-0 nova_compute[189381]: 2025-11-25 11:19:28.959 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:29 compute-0 nova_compute[189381]: 2025-11-25 11:19:29.623 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:29 compute-0 podman[203557]: time="2025-11-25T11:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:19:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:19:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Nov 25 11:19:31 compute-0 openstack_network_exporter[205722]: ERROR   11:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:19:31 compute-0 openstack_network_exporter[205722]: ERROR   11:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:19:31 compute-0 openstack_network_exporter[205722]: ERROR   11:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:19:31 compute-0 openstack_network_exporter[205722]: ERROR   11:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:19:31 compute-0 openstack_network_exporter[205722]: ERROR   11:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:19:32 compute-0 nova_compute[189381]: 2025-11-25 11:19:32.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:32 compute-0 nova_compute[189381]: 2025-11-25 11:19:32.023 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 25 11:19:33 compute-0 nova_compute[189381]: 2025-11-25 11:19:33.960 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:34 compute-0 nova_compute[189381]: 2025-11-25 11:19:34.625 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:19:36.086 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:19:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:19:36.086 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:19:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:19:36.086 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:19:37 compute-0 podman[261565]: 2025-11-25 11:19:37.960353688 +0000 UTC m=+0.072259124 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 25 11:19:38 compute-0 podman[261566]: 2025-11-25 11:19:38.001843358 +0000 UTC m=+0.106616139 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 25 11:19:38 compute-0 nova_compute[189381]: 2025-11-25 11:19:38.964 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:39 compute-0 nova_compute[189381]: 2025-11-25 11:19:39.628 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:40 compute-0 podman[261604]: 2025-11-25 11:19:40.21906327 +0000 UTC m=+0.073707736 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, release-0.7.12=, architecture=x86_64, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git)
Nov 25 11:19:43 compute-0 nova_compute[189381]: 2025-11-25 11:19:43.967 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:44 compute-0 nova_compute[189381]: 2025-11-25 11:19:44.630 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:45 compute-0 nova_compute[189381]: 2025-11-25 11:19:45.033 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:45 compute-0 ovn_controller[97779]: 2025-11-25T11:19:45Z|00215|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 25 11:19:46 compute-0 podman[261624]: 2025-11-25 11:19:46.970893838 +0000 UTC m=+0.084091404 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 25 11:19:47 compute-0 nova_compute[189381]: 2025-11-25 11:19:47.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:48 compute-0 nova_compute[189381]: 2025-11-25 11:19:48.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:48 compute-0 nova_compute[189381]: 2025-11-25 11:19:48.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:19:48 compute-0 nova_compute[189381]: 2025-11-25 11:19:48.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:19:48 compute-0 nova_compute[189381]: 2025-11-25 11:19:48.035 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 25 11:19:48 compute-0 nova_compute[189381]: 2025-11-25 11:19:48.969 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.048 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.049 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.370 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.371 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5360MB free_disk=72.12825775146484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.371 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.372 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.452 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.453 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.486 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.516 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.517 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.518 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
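The set_inventory_for_provider payload at 11:19:49.516 above is what the resource tracker reports to Placement. Placement turns each inventory record into schedulable capacity as (total - reserved) * allocation_ratio; a minimal sketch reproducing that arithmetic for the exact values logged above (the dict literal is copied from the log line, not fetched from a live API):

# Placement capacity math for the inventory reported above:
# capacity = (total - reserved) * allocation_ratio
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable units")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2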
Nov 25 11:19:49 compute-0 nova_compute[189381]: 2025-11-25 11:19:49.632 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:49 compute-0 podman[261644]: 2025-11-25 11:19:49.949610763 +0000 UTC m=+0.065230412 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 25 11:19:49 compute-0 podman[261643]: 2025-11-25 11:19:49.953785332 +0000 UTC m=+0.072206502 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64)
Nov 25 11:19:52 compute-0 nova_compute[189381]: 2025-11-25 11:19:52.518 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:52 compute-0 podman[261683]: 2025-11-25 11:19:52.947300633 +0000 UTC m=+0.063919745 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:19:52 compute-0 podman[261682]: 2025-11-25 11:19:52.988298949 +0000 UTC m=+0.107329690 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 25 11:19:53 compute-0 nova_compute[189381]: 2025-11-25 11:19:53.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:53 compute-0 nova_compute[189381]: 2025-11-25 11:19:53.971 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:54 compute-0 nova_compute[189381]: 2025-11-25 11:19:54.634 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:57 compute-0 nova_compute[189381]: 2025-11-25 11:19:57.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:57 compute-0 nova_compute[189381]: 2025-11-25 11:19:57.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
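The skip above comes straight from configuration: _reclaim_queued_deletes only does work when reclaim_instance_interval is positive. A hedged sketch of the relevant nova.conf stanza (option name taken from the CONF reference in the log; 0 is the upstream default and disables deferred delete):

[DEFAULT]
# <= 0 means soft-deleted instances are never reclaimed by this
# periodic task; deletes take effect immediately instead.
reclaim_instance_interval = 0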
Nov 25 11:19:58 compute-0 nova_compute[189381]: 2025-11-25 11:19:58.016 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:19:58 compute-0 podman[261723]: 2025-11-25 11:19:58.941635672 +0000 UTC m=+0.060261829 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 25 11:19:58 compute-0 nova_compute[189381]: 2025-11-25 11:19:58.973 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:59 compute-0 nova_compute[189381]: 2025-11-25 11:19:59.637 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:19:59 compute-0 podman[203557]: time="2025-11-25T11:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:19:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:19:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4344 "" "Go-http-client/1.1"
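The two GET requests above are the podman_exporter (see its config_data at 11:19:58, with CONTAINER_HOST=unix:///run/podman/podman.sock) polling the libpod REST API served by the podman process at PID 203557. A minimal stdlib-only sketch of the same containers/json query; the socket path and API version are taken from these log lines, while the Names/State response fields are assumptions about the libpod schema:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # HTTP over the podman API UNIX socket instead of TCP.
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
for c in containers:
    # "Names" and "State" are assumed libpod field names.
    print(c.get("Names"), c.get("State"))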
Nov 25 11:20:01 compute-0 nova_compute[189381]: 2025-11-25 11:20:01.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:01 compute-0 nova_compute[189381]: 2025-11-25 11:20:01.022 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 25 11:20:01 compute-0 nova_compute[189381]: 2025-11-25 11:20:01.049 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 25 11:20:01 compute-0 nova_compute[189381]: 2025-11-25 11:20:01.049 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:01 compute-0 openstack_network_exporter[205722]: ERROR   11:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:20:01 compute-0 openstack_network_exporter[205722]: ERROR   11:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:20:01 compute-0 openstack_network_exporter[205722]: ERROR   11:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:20:01 compute-0 openstack_network_exporter[205722]: ERROR   11:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:20:01 compute-0 openstack_network_exporter[205722]: ERROR   11:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
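This error burst repeats on every scrape (an identical one follows at 11:20:31). The exporter drives ovs-appctl/ovn-appctl style RPCs, which require a per-daemon control socket; ovn-northd runs on the controllers rather than on a compute node, so its missing socket is expected here, and the dpif-netdev "please specify an existing datapath" errors just mean no userspace (netdev) datapath is configured on this host. A hedged sketch of the precondition the exporter is failing, checking for the control-socket files; the pid-stamped glob patterns are assumptions about the default rundirs, not paths taken from this log:

import glob

patterns = (
    "/var/run/openvswitch/ovsdb-server.*.ctl",  # ovsdb-server control socket
    "/var/run/ovn/ovn-northd.*.ctl",            # ovn-northd (controller-side daemon)
)
for pattern in patterns:
    matches = glob.glob(pattern)
    print(pattern, "->", matches if matches else "no control socket found")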
Nov 25 11:20:03 compute-0 nova_compute[189381]: 2025-11-25 11:20:03.976 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:04 compute-0 nova_compute[189381]: 2025-11-25 11:20:04.639 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:05 compute-0 nova_compute[189381]: 2025-11-25 11:20:05.071 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:08 compute-0 podman[261747]: 2025-11-25 11:20:08.958255973 +0000 UTC m=+0.073811478 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:20:08 compute-0 podman[261746]: 2025-11-25 11:20:08.978889795 +0000 UTC m=+0.098103835 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 25 11:20:08 compute-0 nova_compute[189381]: 2025-11-25 11:20:08.981 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:09 compute-0 nova_compute[189381]: 2025-11-25 11:20:09.642 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:10 compute-0 podman[261783]: 2025-11-25 11:20:10.94699696 +0000 UTC m=+0.062162594 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 25 11:20:13 compute-0 nova_compute[189381]: 2025-11-25 11:20:13.979 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:14 compute-0 nova_compute[189381]: 2025-11-25 11:20:14.644 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:17 compute-0 podman[261803]: 2025-11-25 11:20:17.957427647 +0000 UTC m=+0.069223097 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 25 11:20:18 compute-0 nova_compute[189381]: 2025-11-25 11:20:18.981 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:19 compute-0 nova_compute[189381]: 2025-11-25 11:20:19.647 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:20 compute-0 podman[261824]: 2025-11-25 11:20:20.984958052 +0000 UTC m=+0.094864302 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 25 11:20:20 compute-0 podman[261823]: 2025-11-25 11:20:20.994677401 +0000 UTC m=+0.110970624 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 25 11:20:23 compute-0 podman[261869]: 2025-11-25 11:20:23.949253525 +0000 UTC m=+0.064167872 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 25 11:20:23 compute-0 podman[261868]: 2025-11-25 11:20:23.982398536 +0000 UTC m=+0.101248966 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 25 11:20:23 compute-0 nova_compute[189381]: 2025-11-25 11:20:23.982 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:24 compute-0 nova_compute[189381]: 2025-11-25 11:20:24.650 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:28 compute-0 nova_compute[189381]: 2025-11-25 11:20:28.984 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:29 compute-0 nova_compute[189381]: 2025-11-25 11:20:29.653 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:29 compute-0 podman[203557]: time="2025-11-25T11:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:20:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:20:29 compute-0 podman[203557]: @ - - [25/Nov/2025:11:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4343 "" "Go-http-client/1.1"
Nov 25 11:20:29 compute-0 podman[261909]: 2025-11-25 11:20:29.951874382 +0000 UTC m=+0.063048670 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 25 11:20:31 compute-0 openstack_network_exporter[205722]: ERROR   11:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:20:31 compute-0 openstack_network_exporter[205722]: ERROR   11:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:20:31 compute-0 openstack_network_exporter[205722]: ERROR   11:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:20:31 compute-0 openstack_network_exporter[205722]: ERROR   11:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:20:31 compute-0 openstack_network_exporter[205722]: ERROR   11:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 25 11:20:33 compute-0 nova_compute[189381]: 2025-11-25 11:20:33.987 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:34 compute-0 nova_compute[189381]: 2025-11-25 11:20:34.656 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:20:36.086 106634 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:20:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:20:36.088 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:20:36 compute-0 ovn_metadata_agent[106629]: 2025-11-25 11:20:36.088 106634 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:20:38 compute-0 nova_compute[189381]: 2025-11-25 11:20:38.989 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:39 compute-0 nova_compute[189381]: 2025-11-25 11:20:39.658 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:39 compute-0 podman[261933]: 2025-11-25 11:20:39.94822671 +0000 UTC m=+0.063344958 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 25 11:20:39 compute-0 podman[261934]: 2025-11-25 11:20:39.966047771 +0000 UTC m=+0.077367400 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 25 11:20:41 compute-0 podman[261972]: 2025-11-25 11:20:41.980289239 +0000 UTC m=+0.094187062 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, config_id=edpm, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 25 11:20:43 compute-0 nova_compute[189381]: 2025-11-25 11:20:43.990 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:44 compute-0 nova_compute[189381]: 2025-11-25 11:20:44.662 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:45 compute-0 nova_compute[189381]: 2025-11-25 11:20:45.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:48 compute-0 podman[261991]: 2025-11-25 11:20:48.962129396 +0000 UTC m=+0.078025649 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 25 11:20:48 compute-0 nova_compute[189381]: 2025-11-25 11:20:48.993 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.020 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.049 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.050 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.050 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.369 189385 WARNING nova.virt.libvirt.driver [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.370 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5357MB free_disk=72.12825775146484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
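The hypervisor resource view embeds the PCI device list as JSON, and the label_<vendor>_<product> strings are just vendor:product pairs (0x1af4 is virtio, 0x8086 is Intel). A short sketch that groups a trimmed copy of that data the same way:

```python
# Sketch: summarize the pci_devices JSON from the resource view above.
# Trimmed to three entries; vendor 0x1af4 is virtio, 0x8086 is Intel.
import json
from collections import Counter

pci_json = """[
  {"address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4"},
  {"address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4"},
  {"address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086"}
]"""

counts = Counter(f"{d['vendor_id']}:{d['product_id']}" for d in json.loads(pci_json))
for label, n in counts.most_common():
    print(f"label_{label.replace(':', '_')} x{n}")
```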
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.370 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.370 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.665 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.686 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.686 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.815 189385 DEBUG nova.compute.provider_tree [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed in ProviderTree for provider: a660730c-fa97-4a71-acf8-b1f3eef924ba update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.827 189385 DEBUG nova.scheduler.client.report [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Inventory has not changed for provider a660730c-fa97-4a71-acf8-b1f3eef924ba based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
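The inventory dict reported to placement combines total, reserved, and allocation_ratio per resource class; placement computes usable capacity as (total - reserved) * allocation_ratio. Worked on the numbers from the log entry above:

```python
# Sketch: usable capacity per resource class from the inventory above,
# using placement's capacity formula (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```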
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.828 189385 DEBUG nova.compute.resource_tracker [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 25 11:20:49 compute-0 nova_compute[189381]: 2025-11-25 11:20:49.828 189385 DEBUG oslo_concurrency.lockutils [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 25 11:20:50 compute-0 nova_compute[189381]: 2025-11-25 11:20:50.829 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:50 compute-0 nova_compute[189381]: 2025-11-25 11:20:50.830 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 25 11:20:50 compute-0 nova_compute[189381]: 2025-11-25 11:20:50.830 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 25 11:20:50 compute-0 nova_compute[189381]: 2025-11-25 11:20:50.844 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
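The heal task above rebuilds its candidate list and finds nothing to do, since this host runs no instances. A simplified sketch of that periodic-task shape (hedged; refresh_network_info is a hypothetical stand-in for the real Neutron refresh, and the real manager keeps more state):

```python
# Simplified sketch of the _heal_instance_info_cache pattern logged above:
# rebuild the instance list when empty, then refresh one cache per period.
from collections import deque

instances_to_heal: deque = deque()

def list_local_instances() -> list:
    return []        # this host currently runs no instances (as in the log)

def refresh_network_info(instance: str) -> None:
    print(f"healing network info cache for {instance}")

def heal_instance_info_cache() -> None:
    if not instances_to_heal:
        instances_to_heal.extend(list_local_instances())
    if not instances_to_heal:
        print("Didn't find any instances for network info cache update.")
        return
    refresh_network_info(instances_to_heal.popleft())

heal_instance_info_cache()
```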
Nov 25 11:20:51 compute-0 podman[262012]: 2025-11-25 11:20:51.954449722 +0000 UTC m=+0.069321690 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
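Per the config above, node_exporter publishes on host port 9100 behind a TLS web config. A sketch that scrapes it and keeps only systemd unit metrics (the URL and CA path are illustrative; with the exporter's TLS web config you would need the matching certificate settings):

```python
# Sketch: scrape the node_exporter published on 9100 above and keep only
# systemd unit metrics. Assumes the exporter's TLS web config accepts this
# CA bundle; both the URL and the cert path are illustrative.
import ssl
import urllib.request

ctx = ssl.create_default_context(cafile="/var/lib/openstack/certs/telemetry/default/ca.crt")
with urllib.request.urlopen("https://compute-0:9100/metrics", context=ctx) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)
```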
Nov 25 11:20:51 compute-0 podman[262011]: 2025-11-25 11:20:51.961925566 +0000 UTC m=+0.077979138 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Nov 25 11:20:53 compute-0 nova_compute[189381]: 2025-11-25 11:20:53.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:53 compute-0 nova_compute[189381]: 2025-11-25 11:20:53.995 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:54 compute-0 nova_compute[189381]: 2025-11-25 11:20:54.668 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:54 compute-0 podman[262052]: 2025-11-25 11:20:54.961954202 +0000 UTC m=+0.075524428 container health_status b0ca530c8d0cfc55f0806f46302a80fede3a6e806d130f8b1bb0b147e57c25d8 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:20:54 compute-0 podman[262051]: 2025-11-25 11:20:54.996603756 +0000 UTC m=+0.113371953 container health_status 5fca4257651ecb2d650d742bf9d9d9d81e6d70fdd2261040a5181a8f43e8c022 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 25 11:20:55 compute-0 nova_compute[189381]: 2025-11-25 11:20:55.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:58 compute-0 nova_compute[189381]: 2025-11-25 11:20:58.998 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:59 compute-0 nova_compute[189381]: 2025-11-25 11:20:59.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:59 compute-0 nova_compute[189381]: 2025-11-25 11:20:59.021 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:20:59 compute-0 nova_compute[189381]: 2025-11-25 11:20:59.021 189385 DEBUG nova.compute.manager [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 25 11:20:59 compute-0 nova_compute[189381]: 2025-11-25 11:20:59.671 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:20:59 compute-0 podman[203557]: time="2025-11-25T11:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 25 11:20:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 25 11:20:59 compute-0 podman[203557]: @ - - [25/Nov/2025:11:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4345 "" "Go-http-client/1.1"
Nov 25 11:21:00 compute-0 podman[262095]: 2025-11-25 11:21:00.94683466 +0000 UTC m=+0.061691420 container health_status ee32716a2812ae61370c928af2264156df823bdda2099d1bdd6eaaf64ede5030 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 25 11:21:01 compute-0 openstack_network_exporter[205722]: ERROR   11:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 25 11:21:01 compute-0 openstack_network_exporter[205722]: ERROR   11:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:21:01 compute-0 openstack_network_exporter[205722]: ERROR   11:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 25 11:21:01 compute-0 openstack_network_exporter[205722]: ERROR   11:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 25 11:21:01 compute-0 openstack_network_exporter[205722]: ERROR   11:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
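The exporter errors above mean it could not find the daemons' appctl control sockets: OVS/OVN daemons create "<daemon>.<pid>.ctl" files under their run directories, and ovn-northd is not expected on a compute node at all (it runs on the control plane), so these messages are noise rather than a local fault. A sketch that checks for the sockets the exporter looks for (paths are assumptions based on standard OVS/OVN packaging):

```python
# Sketch: check for the appctl control sockets the exporter complains about.
# OVS/OVN daemons create "<daemon>.<pid>.ctl" under their run directories;
# these default paths are assumptions based on standard OVS packaging.
from glob import glob

for pattern in (
    "/run/openvswitch/ovsdb-server.*.ctl",
    "/run/openvswitch/ovs-vswitchd.*.ctl",
    "/run/ovn/ovn-northd.*.ctl",       # absent on a compute node
):
    hits = glob(pattern)
    print(f"{pattern}: {hits or 'no control socket found'}")
```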
Nov 25 11:21:01 compute-0 sshd-session[262119]: Accepted publickey for zuul from 192.168.122.10 port 56568 ssh2: ECDSA SHA256:yx/yYg6PTWXSvFeD19SSU+0WfwQ1qirxQGbO29m+PjY
Nov 25 11:21:01 compute-0 systemd-logind[822]: New session 32 of user zuul.
Nov 25 11:21:01 compute-0 systemd[1]: Started Session 32 of User zuul.
Nov 25 11:21:01 compute-0 sshd-session[262119]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 25 11:21:01 compute-0 sudo[262123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 25 11:21:01 compute-0 sudo[262123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.347 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.348 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
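The manager first warns that one worker thread serves more pollsters than it can run in parallel, then funnels them through a ThreadPoolExecutor, which is why the cycle simply serializes the excess. A generic sketch of that shape (pollster names taken from this log; the executor sizing is the point):

```python
# Sketch of the polling pattern logged above: N pollsters funneled through a
# ThreadPoolExecutor with fewer workers, so the cycle serializes the excess.
from concurrent.futures import ThreadPoolExecutor

pollsters = ["network.outgoing.bytes", "memory.usage", "cpu", "disk.device.read.bytes"]

def run_pollster(name: str) -> str:
    return f"Finished processing pollster [{name}]."

with ThreadPoolExecutor(max_workers=1) as executor:   # fewer workers than pollsters
    for line in executor.map(run_pollster, pollsters):
        print(line)
```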
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f24097a3fe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086440e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f240b7182c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f2408644320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a33e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a34a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a35c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24086445f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a2660>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f24086440b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f24097a38f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f2408644140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f24097a3950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3ec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f24086441d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.356 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f2408644260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a36e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.356 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f24097a18b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f24086442f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f24097a1940>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f24097a32f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.358 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.358 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f24097a3f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f240846a420>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'cpu': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f24097a3410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f24097a3470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f24097a34d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f24097a3530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f24097a3590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f24086445c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.359 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f24097a35f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f24097a39b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f24097a18e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f24097a2210>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f24097a3650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f24097a3e90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f24097a36b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f24097a3ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f24097a3f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f2409767c20>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:03 compute-0 ceilometer_agent_compute[200081]: 2025-11-25 11:21:03.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 25 11:21:04 compute-0 nova_compute[189381]: 2025-11-25 11:21:04.000 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:21:04 compute-0 nova_compute[189381]: 2025-11-25 11:21:04.673 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:21:05 compute-0 nova_compute[189381]: 2025-11-25 11:21:05.022 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 25 11:21:07 compute-0 ovs-vsctl[262291]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 25 11:21:08 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 262147 (sos)
Nov 25 11:21:08 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 25 11:21:08 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 25 11:21:08 compute-0 virtqemud[189024]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 25 11:21:08 compute-0 virtqemud[189024]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 25 11:21:09 compute-0 nova_compute[189381]: 2025-11-25 11:21:09.002 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:21:09 compute-0 virtqemud[189024]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 25 11:21:09 compute-0 nova_compute[189381]: 2025-11-25 11:21:09.676 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:21:10 compute-0 podman[262655]: 2025-11-25 11:21:10.241351458 +0000 UTC m=+0.095818590 container health_status 8663f4ffcc7830adad417f45ea24692b4256c1c5637fb90460ff4d1c6cd43aab (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 25 11:21:10 compute-0 podman[262648]: 2025-11-25 11:21:10.274947731 +0000 UTC m=+0.130620828 container health_status 11e71f98870924af3b479341aee185ae3fbc4cdbf5ef99d1287188fdf557329d (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=3c7bc1fa2adfe9145fe93e6d3cedb844, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Nov 25 11:21:10 compute-0 crontab[262757]: (root) LIST (root)
Nov 25 11:21:12 compute-0 podman[262847]: 2025-11-25 11:21:12.993108042 +0000 UTC m=+0.100390710 container health_status ff117d62cedee6003e3dac2485a620dd1d096faa748c8f320c0573f9c73aee34 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, name=ubi9, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=)
Nov 25 11:21:13 compute-0 systemd[1]: Starting Hostname Service...
Nov 25 11:21:13 compute-0 systemd[1]: Started Hostname Service.
Nov 25 11:21:14 compute-0 nova_compute[189381]: 2025-11-25 11:21:14.003 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:21:14 compute-0 nova_compute[189381]: 2025-11-25 11:21:14.678 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:21:19 compute-0 nova_compute[189381]: 2025-11-25 11:21:19.005 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:21:19 compute-0 nova_compute[189381]: 2025-11-25 11:21:19.680 189385 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 25 11:21:19 compute-0 podman[263612]: 2025-11-25 11:21:19.975292928 +0000 UTC m=+0.084551316 container health_status 1813b719326143e037d6ed1a72ff16283f9dce9d7684aed89109903600639d15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 25 11:21:22 compute-0 ovs-appctl[264139]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 11:21:22 compute-0 ovs-appctl[264143]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 11:21:22 compute-0 ovs-appctl[264147]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 25 11:21:22 compute-0 podman[264269]: 2025-11-25 11:21:22.979737442 +0000 UTC m=+0.094495902 container health_status 7f7a99add085050cc3c3f5fbd02f6a180dadda134b9150da48f66087d58be7e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 25 11:21:22 compute-0 podman[264266]: 2025-11-25 11:21:22.981669007 +0000 UTC m=+0.099229887 container health_status 57c176bf13c5aa9d09135813f98f0fbcbc530d31cc8361214e8be6038c63dc7b (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, version=9.6, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7)
Nov 25 11:21:23 compute-0 nova_compute[189381]: 2025-11-25 11:21:23.015 189385 DEBUG oslo_service.periodic_task [None req-798a0e77-ec8b-47f7-93ad-70aef8006011 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
